Commit 0ac837b3 by Michael DeHaan

Merge branch 'devel' back into master

Conflicts:
	CHANGELOG.md
	VERSION
	docs/man/man1/ansible-playbook.1
	docs/man/man1/ansible.1
	lib/ansible/__init__.py
	lib/ansible/playbook.py
	lib/ansible/utils.py
	packaging/rpm/ansible.spec
parents 5a156d68 56c62683
Ansible Changes By Release
==========================
0.4 "Unchained" ------- in progress
0.4 "Unchained" ------- May 23, 2012
* See the CHANGELOG.md file on the devel branch for a summary
Internals/Core
* internal inventory API now more object oriented, parsers decoupled
* async handling improvements
* misc fixes for running ansible on OS X (overlord only)
* sudo improvements, now works much more smoothly
* sudo to a particular user with -U/--sudo-user, or using 'sudo_user: foo' in a playbook
* --private-key CLI option to work with pem files
Inventory
* can use -i host1,host2,host3:port to specify hosts not in inventory (replaces --override-hosts)
* ansible INI style format can do groups of groups [groupname:children] and group vars [groupname:vars]
* user and group modules take an optional system=yes|no on creation (default no)
* list of hosts in playbooks can be expressed as a YAML list in addition to ; delimited
Playbooks
* variables can be replaced like ${foo.nested_hash_key.nested_subkey[array_index]}
* unicode now ok in templates (assumes utf8)
* able to pass a host specifier or group name into "hosts:" with --extra-vars
* ansible-pull script and example playbook (extreme scaling, remediation)
* inventory_hostname variable available that contains the value of the host as ansible knows it
* variables in the 'all' section can be used to define other variables based on those values
* 'group_names' is now a variable made available to templates
* first_available_file feature, see selective_file_sources.yml in examples/playbooks for info
* --extra-vars="a=2 b=3" etc, now available to inject parameters into playbooks from CLI
Incompatible Changes
* jinja2 is only usable in templates, not playbooks, use $foo instead
* --override-hosts removed, can use -i with comma notation (-i "ahost,bhost")
* modules can no longer include stderr output (paramiko limitation from sudo)
Module Changes
* tweaks to SELinux implementation for file module
* fixes for yum module corner cases on EL5
* file module now correctly returns the mode in octal
* fix for symlink handling in the file module
* service takes an enable=yes|no which works with chkconfig or update-rc.d as appropriate
* service module works better on Ubuntu
* git module now does resets and such to work more smoothly on updates
* modules all now log to syslog
* enabled=yes|no on a service can be used to toggle chkconfig & update-rc.d states
* git module supports branch=
* service fixes to better detect status using return codes of the service script
* custom facts provided by the setup module mean no dependency on Ruby, facter, or ohai
* service now has a state=reloaded
* raw module for bootstrapping and talking to routers w/o Python, etc
Misc Bugfixes
* fixes for variable parsing in only_if lines
* misc fixes to key=value parsing
* variables with mixed case now legal
* fix to internals of hacking/test-module development script
0.3 "Baluchitherium" -- April 23, 2012
......
......@@ -21,9 +21,8 @@
import sys
import getpass
import time
import ansible.runner
from ansible.runner import Runner
import ansible.constants as C
from ansible import utils
from ansible import errors
......@@ -40,7 +39,6 @@ class Cli(object):
def __init__(self):
self.stats = callbacks.AggregateStats()
self.callbacks = callbacks.CliRunnerCallbacks()
self.silent_callbacks = callbacks.DefaultRunnerCallbacks()
# ----------------------------------------------
......@@ -54,7 +52,6 @@ class Cli(object):
parser.add_option('-m', '--module-name', dest='module_name',
help="module name to execute (default=%s)" % C.DEFAULT_MODULE_NAME,
default=C.DEFAULT_MODULE_NAME)
options, args = parser.parse_args()
self.callbacks.options = options
......@@ -73,8 +70,8 @@ class Cli(object):
inventory_manager = inventory.Inventory(options.inventory)
hosts = inventory_manager.list_hosts(pattern)
if len(hosts) == 0:
print >>sys.stderr, "No hosts matched"
sys.exit(1)
print >>sys.stderr, "No hosts matched"
sys.exit(1)
sshpass = None
sudopass = None
......@@ -82,84 +79,46 @@ class Cli(object):
sshpass = getpass.getpass(prompt="SSH password: ")
if options.ask_sudo_pass:
sudopass = getpass.getpass(prompt="sudo password: ")
options.sudo = True
if options.sudo_user:
options.sudo = True
options.sudo_user = options.sudo_user or C.DEFAULT_SUDO_USER
if options.tree:
utils.prepare_writeable_dir(options.tree)
if options.seconds:
print "background launch...\n\n"
runner = ansible.runner.Runner(
runner = Runner(
module_name=options.module_name, module_path=options.module_path,
module_args=options.module_args,
remote_user=options.remote_user, remote_pass=sshpass,
inventory=inventory_manager, timeout=options.timeout,
private_key_file=options.private_key_file,
forks=options.forks,
background=options.seconds, pattern=pattern,
pattern=pattern,
callbacks=self.callbacks, sudo=options.sudo,
sudo_pass=sudopass,
sudo_pass=sudopass,sudo_user=options.sudo_user,
transport=options.connection, debug=options.debug
)
return (runner, runner.run())
# ----------------------------------------------
def get_polling_runner(self, old_runner, jid):
return ansible.runner.Runner(
module_name='async_status', module_path=old_runner.module_path,
module_args="jid=%s" % jid, remote_user=old_runner.remote_user,
remote_pass=old_runner.remote_pass, inventory=old_runner.inventory,
timeout=old_runner.timeout, forks=old_runner.forks,
pattern='*', callbacks=self.silent_callbacks,
)
# ----------------------------------------------
if options.seconds:
print "background launch...\n\n"
results, poller = runner.runAsync(options.seconds)
results = self.poll_while_needed(poller, options)
else:
results = runner.run()
def hosts_to_poll(self, results):
hosts = []
for (host, res) in results['contacted'].iteritems():
if res.get('started',False):
hosts.append(host)
return hosts
return (runner, results)
# ----------------------------------------------
def poll_if_needed(self, runner, results, options, args):
def poll_while_needed(self, poller, options):
''' summarize results from Runner '''
if results is None:
exit("No hosts matched")
# BACKGROUND POLL LOGIC when -B and -P are specified
# FIXME: refactor
if options.seconds and options.poll_interval > 0:
poll_hosts = results['contacted'].keys()
if len(poll_hosts) == 0:
exit("no jobs were launched successfully")
ahost = poll_hosts[0]
jid = results['contacted'][ahost].get('ansible_job_id', None)
if jid is None:
exit("unexpected error: unable to determine jid")
clock = options.seconds
while (clock >= 0):
runner.inventory.restrict_to(poll_hosts)
polling_runner = self.get_polling_runner(runner, jid)
poll_results = polling_runner.run()
runner.inventory.lift_restriction()
if poll_results is None:
break
for (host, host_result) in poll_results['contacted'].iteritems():
# override last result with current status result for report
results['contacted'][host] = host_result
print utils.async_poll_status(jid, host, clock, host_result)
for (host, host_result) in poll_results['dark'].iteritems():
print "FAILED: %s => %s" % (host, host_result)
clock = clock - options.poll_interval
time.sleep(options.poll_interval)
poll_hosts = self.hosts_to_poll(poll_results)
if len(poll_hosts)==0:
break
poller.wait(options.seconds, options.poll_interval)
return poller.results
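For reference, a minimal sketch (not part of this commit) of the new background flow that replaces the removed polling loop: runAsync() launches the module with the -B time limit and returns a poller, and poller.wait() drives the on_async_poll callbacks at the -P interval. It assumes the runner and options built in get_runner() above.

results, poller = runner.runAsync(options.seconds)       # -B: launch in background, returns job ids
poller.wait(options.seconds, options.poll_interval)      # -P: block, firing on_async_poll callbacks
print utils.bigjson(poller.results)                      # aggregated per-host results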
########################################################
......@@ -172,6 +131,4 @@ if __name__ == '__main__':
# Generic handler for ansible specific errors
print "ERROR: %s" % str(e)
sys.exit(1)
else:
cli.poll_if_needed(runner, results, options, args)
......@@ -33,8 +33,8 @@ def main(args):
# create parser for CLI options
usage = "%prog playbook.yml"
parser = utils.base_parser(constants=C, usage=usage, connect_opts=True, runas_opts=True)
parser.add_option('-O', '--override-hosts', dest="override_hosts", default=None,
help="run playbook against these hosts regardless of inventory settings")
parser.add_option('-e', '--extra-vars', dest="extra_vars", default=None,
help="set additional key=value variables from the CLI")
options, args = parser.parse_args(args)
......@@ -48,9 +48,11 @@ def main(args):
sshpass = getpass.getpass(prompt="SSH password: ")
if options.ask_sudo_pass:
sudopass = getpass.getpass(prompt="sudo password: ")
override_hosts = None
if options.override_hosts:
override_hosts = options.override_hosts.split(",")
options.sudo = True
if options.sudo_user:
options.sudo = True
options.sudo_user = options.sudo_user or C.DEFAULT_SUDO_USER
extra_vars = utils.parse_kv(options.extra_vars)
# run all playbooks specified on the command line
for playbook in args:
......@@ -63,7 +65,6 @@ def main(args):
playbook=playbook,
module_path=options.module_path,
host_list=options.inventory,
override_hosts=override_hosts,
forks=options.forks,
debug=options.debug,
remote_user=options.remote_user,
......@@ -74,7 +75,10 @@ def main(args):
timeout=options.timeout,
transport=options.connection,
sudo=options.sudo,
sudo_pass=sudopass
sudo_user=options.sudo_user,
sudo_pass=sudopass,
extra_vars=extra_vars,
private_key_file=options.private_key_file
)
try:
......
#!/usr/bin/env python
# ansible-pull is a script that runs ansible in local mode
# after checking out a playbooks directory from git. There is an
# example playbook to bootstrap this script in the examples/ dir which
# installs ansible and sets it up to run on cron.
#
# usage:
# ansible-pull -d /var/ansible/local -U http://wherever/content.git -C production
#
# the git repo must contain a playbook named 'local.yml'
# (c) 2012, Stephen Fromm <sfromm@gmail.com>
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import os
import subprocess
import sys
from optparse import OptionParser
DEFAULT_PLAYBOOK = 'local.yml'
def _run(cmd):
cmd = subprocess.Popen(cmd, shell=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
print out
if cmd.returncode != 0:
print err
return cmd.returncode
def main(args):
""" Set up and run a local playbook """
usage = "%prog [options]"
parser = OptionParser()
parser.add_option('-d', '--directory', dest='dest', default=None,
help='Directory to checkout git repository')
parser.add_option('-U', '--url', dest='url',
default=None,
help='URL of git repository')
parser.add_option('-C', '--checkout', dest='checkout',
default="HEAD",
help='Branch/Tag/Commit to checkout. Defaults to HEAD.')
options, args = parser.parse_args(args)
git_opts = "repo=%s dest=%s version=%s" % (options.url, options.dest, options.checkout)
cmd = 'ansible all -c local -m git -a "%s"' % git_opts
print "cmd=%s" % cmd
rc = _run(cmd)
if rc != 0:
return rc
os.chdir(options.dest)
cmd = 'ansible-playbook -c local %s' % DEFAULT_PLAYBOOK
rc = _run(cmd)
return rc
if __name__ == '__main__':
try:
sys.exit(main(sys.argv[1:]))
except KeyboardInterrupt, e:
print >>sys.stderr, "Exit on user request.\n"
sys.exit(1)
'\" t
.\" Title: ansible-playbook
.\" Author: [see the "AUTHOR" section]
.\" Generator: DocBook XSL Stylesheets v1.76.1 <http://docbook.sf.net/>
.\" Date: 05/01/2012
.\" Generator: DocBook XSL Stylesheets v1.75.2 <http://docbook.sf.net/>
.\" Date: 05/23/2012
.\" Manual: System administration commands
.\" Source: Ansible 0.3.1
.\" Source: Ansible 0.4
.\" Language: English
.\"
.TH "ANSIBLE\-PLAYBOOK" "1" "05/01/2012" "Ansible 0\&.3\&.1" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.\" http://bugs.debian.org/507673
.\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH "ANSIBLE\-PLAYBOOK" "1" "05/23/2012" "Ansible 0\&.4" "System administration commands"
.\" -----------------------------------------------------------------
.\" * set default formatting
.\" -----------------------------------------------------------------
......@@ -45,7 +36,7 @@ The names of one or more YAML format files to run as ansible playbooks\&.
.PP
\fB\-D\fR, \fB\-\-debug\fR
.RS 4
Print any messages the remote module sends to standard error to the console
Debug mode
.RE
.PP
\fB\-i\fR \fIPATH\fR, \fB\-\-inventory=\fR\fIPATH\fR
......@@ -64,6 +55,11 @@ to load modules from\&. The default is
\fI/usr/share/ansible\fR\&.
.RE
.PP
\fB\-e\fR \fIVARS\fR, \fB\-\-extra\-vars=\fR\fIVARS\fR
.RS 4
Extra variables to inject into a playbook, in key=value key=value format\&.
.RE
.PP
\fB\-f\fR \fINUM\fR, \fB\-\-forks=\fR\fINUM\fR
.RS 4
Level of parallelism\&.
......@@ -87,13 +83,6 @@ Connection timeout to use when trying to talk to hosts, in
\fISECONDS\fR\&.
.RE
.PP
\fB\-O\fR \fIOVERRIDE_HOSTS\fR, \fB\-\-override\-hosts=\fR\fIOVERRIDE_HOSTS\fR
.RS 4
Ignore the inventory file and run the playbook against only these hosts\&. "hosts:" line in playbook should be set to
\fIall\fR
when using this option\&.
.RE
.PP
\fB\-s\fR, \fB\-\-sudo\fR
.RS 4
Force all plays to use sudo, even if not marked as such\&.
......
......@@ -36,8 +36,7 @@ OPTIONS
*-D*, *--debug*::
Print any messages the remote module sends to standard error to the console
Debug mode
*-i* 'PATH', *--inventory=*'PATH'::
......@@ -48,6 +47,9 @@ The 'PATH' to the inventory hosts file, which defaults to '/etc/ansible/hosts'.
The 'DIRECTORY' to load modules from. The default is '/usr/share/ansible'.
*-e* 'VARS', *--extra-vars=*'VARS'::
Extra variables to inject into a playbook, in key=value key=value format.
*-f* 'NUM', *--forks=*'NUM'::
......@@ -69,12 +71,6 @@ Prompt for the password to use for playbook plays that request sudo access, if a
Connection timeout to use when trying to talk to hosts, in 'SECONDS'.
*-O* 'OVERRIDE_HOSTS', *--override-hosts=*'OVERRIDE_HOSTS'::
Ignore the inventory file and run the playbook against only these hosts. "hosts:" line
in playbook should be set to 'all' when using this option.
*-s*, *--sudo*::
Force all plays to use sudo, even if not marked as such.
......
'\" t
.\" Title: ansible
.\" Author: [see the "AUTHOR" section]
.\" Generator: DocBook XSL Stylesheets v1.76.1 <http://docbook.sf.net/>
.\" Date: 05/01/2012
.\" Generator: DocBook XSL Stylesheets v1.75.2 <http://docbook.sf.net/>
.\" Date: 05/23/2012
.\" Manual: System administration commands
.\" Source: Ansible 0.3.1
.\" Source: Ansible 0.4
.\" Language: English
.\"
.TH "ANSIBLE" "1" "05/01/2012" "Ansible 0\&.3\&.1" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.\" http://bugs.debian.org/507673
.\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.TH "ANSIBLE" "1" "05/23/2012" "Ansible 0\&.4" "System administration commands"
.\" -----------------------------------------------------------------
.\" * set default formatting
.\" -----------------------------------------------------------------
......@@ -34,7 +25,7 @@ ansible \- run a command somewhere else
ansible <host\-pattern> [\-f forks] [\-m module_name] [\-a args]
.SH "DESCRIPTION"
.sp
\fBAnsible\fR is an extra\-simple tool/framework/API for doing \*(Aqremote things\*(Aq over SSH\&.
\fBAnsible\fR is an extra\-simple tool/framework/API for doing \'remote things\' over SSH\&.
.SH "ARGUMENTS"
.PP
\fBhost\-pattern\fR
......@@ -72,7 +63,7 @@ to load modules from\&. The default is
\fI/usr/share/ansible\fR\&.
.RE
.PP
\fB\-a\fR \*(Aq\fIARGUMENTS\fR\*(Aq, \fB\-\-args=\fR\*(Aq\fIARGUMENTS\fR\*(Aq
\fB\-a\fR \'\fIARGUMENTS\fR\', \fB\-\-args=\fR\'\fIARGUMENTS\fR\'
.RS 4
The
\fIARGUMENTS\fR
......@@ -81,7 +72,7 @@ to pass to the module\&.
.PP
\fB\-D\fR, \fB\-\-debug\fR
.RS 4
Print any messages the remote module sends to standard error to the console
Debug mode
.RE
.PP
\fB\-k\fR, \fB\-\-ask\-pass\fR
......@@ -138,6 +129,13 @@ Use this remote
instead of root\&.
.RE
.PP
\fB\-U\fR \fISUDO_USERNAME\fR, \fB\-\-sudo\-user=\fR\fISUDO_USERNAME\fR
.RS 4
Sudo to
\fISUDO_USERNAME\fR
instead of root\&. Implies \-\-sudo\&.
.RE
.PP
\fB\-c\fR \fICONNECTION\fR, \fB\-\-connection=\fR\fICONNECTION\fR
.RS 4
Connection type to use\&. Possible options are
......
......@@ -62,7 +62,7 @@ The 'ARGUMENTS' to pass to the module.
*-D*, *--debug*::
Print any messages the remote module sends to standard error to the console
Debug mode
*-k*, *--ask-pass*::
......@@ -101,6 +101,10 @@ Poll a background job every 'NUM' seconds. Requires *-B*.
Use this remote 'USERNAME' instead of root.
*-U* 'SUDO_USERNAME', *--sudo-user=*'SUDO_USERNAME'::
Sudo to 'SUDO_USERNAME' instead of root. Implies --sudo.
*-c* 'CONNECTION', *--connection=*'CONNECTION'::
Connection type to use. Possible options are 'paramiko' (SSH) and 'local'.
......
# ansible-pull setup
#
# on remote hosts, set up ansible to run periodically using the latest code
# from a particular checkout, in a pull-based fashion, inverting Ansible's
# usual push-based operating mode.
#
# This particular pull based mode is ideal for:
#
# (A) massive scale out
# (B) continual system remediation
#
# DO NOT RUN THIS AGAINST YOUR HOSTS WITHOUT CHANGING THE repo_url
# TO SOMETHING YOU HAVE PERSONALLY VERIFIED
#
#
---
- hosts: pull_mode_hosts
user: root
vars:
# schedule is fed directly to cron
schedule: '*/15 * * * *'
# User to run ansible-pull as from cron
cron_user: root
# Directory to where repository will be cloned
workdir: /var/lib/ansible/local
# Repository to check out -- YOU MUST CHANGE THIS
# repo must contain a local.yml file at top level
#repo_url: git://github.com/sfromm/ansible-playbooks.git
repo_url: SUPPLY_YOUR_OWN_GIT_URL_HERE
tasks:
- name: Install ansible
action: yum pkg=ansible state=installed
- name: Create local directory to work from
action: file path=$workdir state=directory owner=root group=root mode=0751
- name: Copy ansible inventory file to client
action: copy src=/etc/ansible/hosts dest=/etc/ansible/hosts
owner=root group=root mode=0644
- name: Create crontab entry to clone/pull git repository
action: template src=templates/ansible-pull.j2 dest=/etc/cron.d/ansible-pull owner=root group=root mode=0644
......@@ -3,6 +3,7 @@
- hosts: all
user: root
sudo: True
tasks:
......
......@@ -34,7 +34,7 @@
# we could also have done something like:
# - include: wordpress.yml user=timmy
# and had access to the template variable {{ user }} in the
# and had access to the template variable $user in the
# included file, if we wanted to. Variables from vars
# and vars_files are also available inside include files
......
......@@ -7,7 +7,8 @@
# on all hosts, run as the user root...
- hosts: all
- name: example play
hosts: all
user: root
# could have also have done:
......
---
# this is an example of how to template a file over using some variables derived
# from the system. For instance, if you wanted to have different configuration
# templates by OS version, this is a neat way to do it. Any Ansible facts, facter facts,
# or ohai facts could be used to do this.
- hosts: all
tasks:
- name: template a config file
action: template dest=/etc/imaginary_file.conf
first_available_file:
# first see if we have a file for this specific host
- /srv/whatever/${ansible_hostname}.conf
# next try to load something like CentOS6.2.conf
- /srv/whatever/${ansible_distribution}${ansible_distribution_version}.conf
# next see if there's a CentOS.conf
- /srv/whatever/${ansible_distribution}.conf
# finally give up and just use something generic
- /srv/whatever/default
# Cron job to git clone/pull a repo and then run locally
{{ schedule }} {{ cron_user }} ansible-pull -d {{ workdir }} -U {{ repo_url }} >/var/log/ansible-pull.log 2>&1
......@@ -49,7 +49,7 @@ if len(sys.argv) > 1:
else:
args = ""
argspath = os.path.expanduser("/.ansible_test_module_arguments")
argspath = os.path.expanduser("~/.ansible_test_module_arguments")
argsfile = open(argspath, 'w')
argsfile.write(args)
argsfile.close()
......@@ -63,8 +63,9 @@ cmd = subprocess.Popen("%s %s" % (modfile, argspath),
if err and err != '':
print "***********************************"
print "RECIEVED DATA ON STDOUT, WILL IGNORE THIS:"
print "RECIEVED DATA ON STDERR, THIS WILL CRASH YOUR MODULE"
print err
sys.exit(1)
try:
print "***********************************"
......@@ -87,3 +88,4 @@ print utils.bigjson(results)
sys.exit(0)
......@@ -14,5 +14,4 @@
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
__version__ = '0.3.1'
__author__ = 'Michael DeHaan'
__version__ = '0.4'
......@@ -50,7 +50,7 @@ class AggregateStats(object):
elif 'skipped' in value and bool(value['skipped']):
self._increment('skipped', host)
elif 'changed' in value and bool(value['changed']):
if not setup:
if not setup and not poll:
self._increment('changed', host)
self._increment('ok', host)
else:
......@@ -98,6 +98,15 @@ class DefaultRunnerCallbacks(object):
def on_no_hosts(self):
pass
def on_async_poll(self, host, res, jid, clock):
pass
def on_async_ok(self, host, res, jid):
pass
def on_async_failed(self, host, res, jid):
pass
########################################################################
class CliRunnerCallbacks(DefaultRunnerCallbacks):
......@@ -106,12 +115,17 @@ class CliRunnerCallbacks(DefaultRunnerCallbacks):
def __init__(self):
# set by /usr/bin/ansible later
self.options = None
self._async_notified = {}
def on_failed(self, host, res):
self._on_any(host,res)
invocation = res.get('invocation','')
if not invocation.startswith('async_status'):
self._on_any(host,res)
def on_ok(self, host, res):
self._on_any(host,res)
invocation = res.get('invocation','')
if not invocation.startswith('async_status'):
self._on_any(host,res)
def on_unreachable(self, host, res):
print "%s | FAILED => %s" % (host, res)
......@@ -122,11 +136,24 @@ class CliRunnerCallbacks(DefaultRunnerCallbacks):
pass
def on_error(self, host, err):
print >>sys.stderr, "stderr: [%s] => %s\n" % (host, err)
print >>sys.stderr, "err: [%s] => %s\n" % (host, err)
def on_no_hosts(self):
print >>sys.stderr, "no hosts matched\n"
def on_async_poll(self, host, res, jid, clock):
if jid not in self._async_notified:
self._async_notified[jid] = clock + 1
if self._async_notified[jid] > clock:
self._async_notified[jid] = clock
print "<job %s> polling, %ss remaining"%(jid, clock)
def on_async_ok(self, host, res, jid):
print "<job %s> finished on %s => %s"%(jid, host, utils.bigjson(res))
def on_async_failed(self, host, res, jid):
print "<job %s> FAILED on %s => %s"%(jid, host, utils.bigjson(res))
def _on_any(self, host, result):
print utils.host_report_msg(host, self.options.module_name, result, self.options.one_line)
if self.options.tree:
......@@ -139,9 +166,10 @@ class PlaybookRunnerCallbacks(DefaultRunnerCallbacks):
def __init__(self, stats):
self.stats = stats
self._async_notified = {}
def on_unreachable(self, host, msg):
print "unreachable: [%s] => %s" % (host, msg)
print "fatal: [%s] => %s" % (host, msg)
def on_failed(self, host, results):
invocation = results.get('invocation',None)
......@@ -160,7 +188,7 @@ class PlaybookRunnerCallbacks(DefaultRunnerCallbacks):
print "ok: [%s] => %s\n" % (host, invocation)
def on_error(self, host, err):
print >>sys.stderr, "stderr: [%s] => %s\n" % (host, err)
print >>sys.stderr, "err: [%s] => %s\n" % (host, err)
def on_skipped(self, host):
print "skipping: [%s]\n" % host
......@@ -168,6 +196,19 @@ class PlaybookRunnerCallbacks(DefaultRunnerCallbacks):
def on_no_hosts(self):
print "no hosts matched or remaining\n"
def on_async_poll(self, host, res, jid, clock):
if jid not in self._async_notified:
self._async_notified[jid] = clock + 1
if self._async_notified[jid] > clock:
self._async_notified[jid] = clock
print "<job %s> polling, %ss remaining"%(jid, clock)
def on_async_ok(self, host, res, jid):
print "<job %s> finished on %s"%(jid, host)
def on_async_failed(self, host, res, jid):
print "<job %s> FAILED on %s"%(jid, host)
########################################################################
class PlaybookCallbacks(object):
......@@ -205,10 +246,3 @@ class PlaybookCallbacks(object):
def on_play_start(self, pattern):
print "PLAY [%s] ****************************\n" % pattern
def on_async_confused(self, msg):
print msg
def on_async_poll(self, jid, host, clock, host_result):
print utils.async_poll_status(jid, host, clock, host_result)
......@@ -19,20 +19,23 @@
################################################
import warnings
# prevent paramiko warning noise
# see http://stackoverflow.com/questions/3920502/
with warnings.catch_warnings():
warnings.simplefilter("ignore")
import paramiko
import traceback
import os
import time
import random
import re
import shutil
import subprocess
import pipes
import socket
import random
from ansible import errors
# prevent paramiko warning noise
# see http://stackoverflow.com/questions/3920502/
with warnings.catch_warnings():
warnings.simplefilter("ignore")
import paramiko
################################################
......@@ -41,14 +44,14 @@ class Connection(object):
_LOCALHOSTRE = re.compile(r"^(127.0.0.1|localhost|%s)$" % os.uname()[1])
def __init__(self, runner, transport):
def __init__(self, runner, transport,sudo_user):
self.runner = runner
self.transport = transport
self.sudo_user = sudo_user
def connect(self, host, port=None):
conn = None
if self.transport == 'local' and self._LOCALHOSTRE.search(host):
conn = LocalConnection(self.runner, host, None)
conn = LocalConnection(self.runner, host)
elif self.transport == 'paramiko':
conn = ParamikoConnection(self.runner, host, port)
if conn is None:
......@@ -73,17 +76,20 @@ class ParamikoConnection(object):
self.port = self.runner.remote_port
def _get_conn(self):
user = self.runner.remote_user
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
ssh.connect(
self.host,
username=self.runner.remote_user,
allow_agent=True,
look_for_keys=True,
self.host,
username=user,
allow_agent=True,
look_for_keys=True,
key_filename=self.runner.private_key_file,
password=self.runner.remote_pass,
timeout=self.runner.timeout,
timeout=self.runner.timeout,
port=self.port
)
except Exception, e:
......@@ -94,65 +100,52 @@ class ParamikoConnection(object):
return ssh
def connect(self):
''' connect to the remote host '''
self.ssh = self._get_conn()
return self
def exec_command(self, cmd, tmp_path, sudoable=False):
def exec_command(self, cmd, tmp_path,sudo_user,sudoable=False):
''' run a command on the remote host '''
if not self.runner.sudo or not sudoable:
stdin, stdout, stderr = self.ssh.exec_command(cmd)
return (stdin, stdout, stderr)
bufsize = 4096
chan = self.ssh.get_transport().open_session()
chan.get_pty()
if not self.runner.sudo or not sudoable:
quoted_command = '"$SHELL" -c ' + pipes.quote(cmd)
chan.exec_command(quoted_command)
else:
# precalculated tmp_path is ONLY required for sudo usage
if tmp_path is None:
raise Exception("expecting tmp_path")
r = random.randint(0,99999)
# invoke command using a new connection over sudo
result_file = os.path.join(tmp_path, "sudo_result.%s" % r)
self.ssh.close()
ssh_sudo = self._get_conn()
sudo_chan = ssh_sudo.invoke_shell()
sudo_chan.send("sudo -s\n")
# FIXME: using sudo with a password adds more delay, someone may wish
# to optimize to see when the channel is actually ready
if self.runner.sudo_pass:
time.sleep(0.1) # this is conservative
sudo_chan.send("%s\n" % self.runner.sudo_pass)
time.sleep(0.1)
# to avoid ssh expect logic, redirect output to file and move the
# file when we are done with it...
sudo_chan.send("(%s >%s_pre 2>/dev/null ; mv %s_pre %s) &\n" % (cmd, result_file, result_file, result_file))
# FIXME: someone may wish to optimize to not background the launch, and tell when the command
# returns, removing the time.sleep(1) here
time.sleep(1)
sudo_chan.close()
self.ssh = self._get_conn()
# now load the results of the JSON execution...
# FIXME: really need some timeout logic here
# though it doesn't make sense to use the SSH timeout or impose any particular
# limit. Upgrades welcome.
sftp = self.ssh.open_sftp()
while True:
# print "waiting on %s" % result_file
time.sleep(1)
try:
sftp.stat(result_file)
break
except IOError:
pass
sftp.close()
# TODO: see if there's a SFTP way to just get the file contents w/o saving
# to disk vs this hack...
stdin, stdout, stderr = self.ssh.exec_command("cat %s" % result_file)
return (stdin, stdout, stderr)
# Rather than detect if sudo wants a password this time, -k makes
# sudo always ask for a password if one is required. The "--"
# tells sudo that this is the end of sudo options and the command
# follows. Passing a quoted compound command to sudo (or sudo -s)
# directly doesn't work, so we shellquote it with pipes.quote()
# and pass the quoted string to the user's shell. We loop reading
# output until we see the randomly-generated sudo prompt set with
# the -p option.
randbits = ''.join(chr(random.randint(ord('a'), ord('z'))) for x in xrange(32))
prompt = '[sudo via ansible, key=%s] password: ' % randbits
sudocmd = 'sudo -k -p "%s" -u %s -- "$SHELL" -c %s' % (prompt,
sudo_user, pipes.quote(cmd))
sudo_output = ''
try:
chan.exec_command(sudocmd)
if self.runner.sudo_pass:
while not sudo_output.endswith(prompt):
chunk = chan.recv(bufsize)
if not chunk:
raise errors.AnsibleError('ssh connection closed waiting for sudo password prompt')
sudo_output += chunk
chan.sendall(self.runner.sudo_pass + '\n')
except socket.timeout:
raise errors.AnsibleError('ssh timed out waiting for sudo.\n' + sudo_output)
stdin = chan.makefile('wb', bufsize)
stdout = chan.makefile('rb', bufsize)
stderr = '' # stderr goes to stdout when using a pty, so this will never output anything.
return stdin, stdout, stderr
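As an illustration (not part of the commit), the command string assembled above looks roughly like this for a sample command; the command and sudo user here are hypothetical.

import pipes
import random

cmd = 'cat /var/log/messages | wc -l'                       # hypothetical module command
randbits = ''.join(chr(random.randint(ord('a'), ord('z'))) for x in xrange(32))
prompt = '[sudo via ansible, key=%s] password: ' % randbits
sudocmd = 'sudo -k -p "%s" -u %s -- "$SHELL" -c %s' % (prompt, 'postgres', pipes.quote(cmd))
# -> sudo -k -p "[sudo via ansible, key=...] password: " -u postgres -- "$SHELL" -c 'cat /var/log/messages | wc -l'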
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
......@@ -195,7 +188,7 @@ class LocalConnection(object):
return self
def exec_command(self, cmd, tmp_path, sudoable=False):
def exec_command(self, cmd, tmp_path,sudo_user,sudoable=False):
''' run a command on the local host '''
if self.runner.sudo and sudoable:
cmd = "sudo -s %s" % cmd
......@@ -231,4 +224,3 @@ class LocalConnection(object):
''' terminate the connection; nothing to do here '''
pass
......@@ -23,6 +23,8 @@ DEFAULT_HOST_LIST = os.environ.get('ANSIBLE_HOSTS',
'/etc/ansible/hosts')
DEFAULT_MODULE_PATH = os.environ.get('ANSIBLE_LIBRARY',
'/usr/share/ansible')
DEFAULT_REMOTE_TMP = os.environ.get('ANSIBLE_REMOTE_TMP',
'/$HOME/.ansible/tmp')
DEFAULT_MODULE_NAME = 'command'
DEFAULT_PATTERN = '*'
......@@ -32,7 +34,9 @@ DEFAULT_TIMEOUT = 10
DEFAULT_POLL_INTERVAL = 15
DEFAULT_REMOTE_USER = 'root'
DEFAULT_REMOTE_PASS = None
DEFAULT_PRIVATE_KEY_FILE = None
DEFAULT_SUDO_PASS = None
DEFAULT_SUDO_USER = 'root'
DEFAULT_REMOTE_PORT = 22
DEFAULT_TRANSPORT = 'paramiko'
DEFAULT_TRANSPORT_OPTS = ['local', 'paramiko']
......
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#############################################
# from ansible import errors
class Group(object):
"""
Group of ansible hosts
"""
def __init__(self, name=None):
self.name = name
self.hosts = []
self.vars = {}
self.child_groups = []
self.parent_groups = []
if self.name is None:
raise Exception("group name is required")
def add_child_group(self, group):
if self == group:
raise Exception("can't add group to itself")
self.child_groups.append(group)
group.parent_groups.append(self)
def add_host(self, host):
self.hosts.append(host)
host.add_group(self)
def set_variable(self, key, value):
self.vars[key] = value
def get_hosts(self):
hosts = []
for kid in self.child_groups:
hosts.extend(kid.get_hosts())
hosts.extend(self.hosts)
return hosts
def get_variables(self):
vars = {}
# FIXME: verify this variable override order is what we want
for ancestor in self.get_ancestors():
vars.update(ancestor.get_variables())
vars.update(self.vars)
return vars
def _get_ancestors(self):
results = {}
for g in self.parent_groups:
results[g.name] = g
results.update(g._get_ancestors())
return results
def get_ancestors(self):
return self._get_ancestors().values()
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#############################################
from ansible import errors
import ansible.constants as C
class Host(object):
"""
A single host managed by ansible
"""
def __init__(self, name=None, port=None):
self.name = name
self.vars = {}
self.groups = []
if port and port != C.DEFAULT_REMOTE_PORT:
self.set_variable('ansible_ssh_port', int(port))
if self.name is None:
raise Exception("host name is required")
def add_group(self, group):
self.groups.append(group)
def set_variable(self, key, value):
self.vars[key] = value
def get_groups(self):
groups = {}
for g in self.groups:
groups[g.name] = g
ancestors = g.get_ancestors()
for a in ancestors:
groups[a.name] = a
return groups.values()
def get_variables(self):
results = {}
for group in self.groups:
results.update(group.get_variables())
results.update(self.vars)
results['inventory_hostname'] = self.name
groups = self.get_groups()
results['group_names'] = sorted([ g.name for g in groups if g.name != 'all'])
return results
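A short usage sketch (not from the commit) of the object-oriented inventory API defined above; the names are made up. Group variables flow down to member hosts, and every host exposes inventory_hostname and group_names.

webservers = Group(name='webservers')
atlanta = Group(name='atlanta')
webservers.add_child_group(atlanta)
webservers.set_variable('http_port', 80)

host = Host(name='www01.example.com')
atlanta.add_host(host)
host.set_variable('favcolor', 'red')

print host.get_variables()
# roughly: {'http_port': 80, 'favcolor': 'red',
#           'inventory_hostname': 'www01.example.com',
#           'group_names': ['atlanta', 'webservers']}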
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#############################################
import fnmatch
import os
import subprocess
import constants as C
from ansible.host import Host
from ansible.group import Group
from ansible import errors
from ansible import utils
class InventoryParser(object):
"""
Host inventory for ansible.
"""
def __init__(self, filename=C.DEFAULT_HOST_LIST):
fh = open(filename)
self.lines = fh.readlines()
self.groups = {}
self.hosts = {}
self._parse()
def _parse(self):
self._parse_base_groups()
self._parse_group_children()
self._parse_group_variables()
return self.groups
# [webservers]
# alpha
# beta:2345
# gamma sudo=True user=root
# delta asdf=jkl favcolor=red
def _parse_base_groups(self):
ungrouped = Group(name='ungrouped')
all = Group(name='all')
all.add_child_group(ungrouped)
self.groups = dict(all=all, ungrouped=ungrouped)
active_group_name = 'ungrouped'
for line in self.lines:
if line.startswith("["):
active_group_name = line.replace("[","").replace("]","").strip()
if line.find(":vars") != -1 or line.find(":children") != -1:
active_group_name = None
else:
new_group = self.groups[active_group_name] = Group(name=active_group_name)
all.add_child_group(new_group)
elif line.startswith("#") or line == '':
pass
elif active_group_name:
tokens = line.split()
if len(tokens) == 0:
continue
hostname = tokens[0]
port = C.DEFAULT_REMOTE_PORT
if hostname.find(":") != -1:
tokens2 = hostname.split(":")
hostname = tokens2[0]
port = tokens2[1]
host = None
if hostname in self.hosts:
host = self.hosts[hostname]
else:
host = Host(name=hostname, port=port)
self.hosts[hostname] = host
if len(tokens) > 1:
for t in tokens[1:]:
(k,v) = t.split("=")
host.set_variable(k,v)
self.groups[active_group_name].add_host(host)
# [southeast:children]
# atlanta
# raleigh
def _parse_group_children(self):
group = None
for line in self.lines:
line = line.strip()
if line is None or line == '':
continue
if line.startswith("[") and line.find(":children]") != -1:
line = line.replace("[","").replace(":children]","")
group = self.groups.get(line, None)
if group is None:
group = self.groups[line] = Group(name=line)
elif line.startswith("#"):
pass
elif line.startswith("["):
group = None
elif group:
kid_group = self.groups.get(line, None)
if kid_group is None:
raise errors.AnsibleError("child group is not defined: (%s)" % line)
else:
group.add_child_group(kid_group)
# [webservers:vars]
# http_port=1234
# maxRequestsPerChild=200
def _parse_group_variables(self):
group = None
for line in self.lines:
line = line.strip()
if line.startswith("[") and line.find(":vars]") != -1:
line = line.replace("[","").replace(":vars]","")
group = self.groups.get(line, None)
if group is None:
raise errors.AnsibleError("can't add vars to undefined group: %s" % line)
elif line.startswith("#"):
pass
elif line.startswith("["):
group = None
elif line == '':
pass
elif group:
if line.find("=") == -1:
raise errors.AnsibleError("variables assigned to group must be in key=value form")
else:
(k,v) = line.split("=")
group.set_variable(k,v)
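A small sketch (not part of the commit) exercising the parser above with the new [group:children] and [group:vars] sections; the file path and its contents are hypothetical.

open('/tmp/example_hosts', 'w').write(
    "[atlanta]\n"
    "www01.example.com:2222\n"
    "\n"
    "[southeast:children]\n"
    "atlanta\n"
    "\n"
    "[southeast:vars]\n"
    "ntp_server=ntp.example.com\n"
)

parser = InventoryParser(filename='/tmp/example_hosts')
southeast = parser.groups['southeast']
print [h.name for h in southeast.get_hosts()]    # ['www01.example.com']
print southeast.get_variables()                  # {'ntp_server': 'ntp.example.com'}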
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#############################################
import constants as C
from ansible.host import Host
from ansible.group import Group
from ansible import errors
from ansible import utils
class InventoryParserYaml(object):
"""
Host inventory for ansible.
"""
def __init__(self, filename=C.DEFAULT_HOST_LIST):
fh = open(filename)
data = fh.read()
fh.close()
self._hosts = {}
self._parse(data)
def _make_host(self, hostname):
if hostname in self._hosts:
return self._hosts[hostname]
else:
host = Host(hostname)
self._hosts[hostname] = host
return host
# see file 'test/yaml_hosts' for syntax
def _parse(self, data):
all = Group('all')
ungrouped = Group('ungrouped')
all.add_child_group(ungrouped)
self.groups = dict(all=all, ungrouped=ungrouped)
yaml = utils.parse_yaml(data)
for item in yaml:
if type(item) in [ str, unicode ]:
host = self._make_host(item)
ungrouped.add_host(host)
elif type(item) == dict and 'host' in item:
host = self._make_host(item['host'])
vars = item.get('vars', {})
if type(vars)==list:
varlist, vars = vars, {}
for subitem in varlist:
vars.update(subitem)
for (k,v) in vars.items():
host.set_variable(k,v)
elif type(item) == dict and 'group' in item:
group = Group(item['group'])
for subresult in item.get('hosts',[]):
if type(subresult) in [ str, unicode ]:
host = self._make_host(subresult)
group.add_host(host)
elif type(subresult) == dict:
host = self._make_host(subresult['host'])
vars = subresult.get('vars',{})
if type(vars) == list:
for subitem in vars:
for (k,v) in subitem.items():
host.set_variable(k,v)
elif type(vars) == dict:
for (k,v) in subresult.get('vars',{}).items():
host.set_variable(k,v)
else:
raise errors.AnsibleError("unexpected type for variable")
group.add_host(host)
vars = item.get('vars',{})
if type(vars) == dict:
for (k,v) in item.get('vars',{}).items():
group.set_variable(k,v)
elif type(vars) == list:
for subitem in vars:
if type(subitem) != dict:
raise errors.AnsibleError("expected a dictionary")
for (k,v) in subitem.items():
group.set_variable(k,v)
self.groups[group.name] = group
all.add_child_group(group)
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#############################################
import os
import subprocess
import constants as C
import os
from ansible.host import Host
from ansible.group import Group
from ansible import errors
from ansible import utils
class InventoryScript(object):
"""
Host inventory parser for ansible using external inventory scripts.
"""
def __init__(self, filename=C.DEFAULT_HOST_LIST):
cmd = [ filename, "--list" ]
sp = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdout, stderr) = sp.communicate()
self.data = stdout
self.groups = self._parse()
def _parse(self):
groups = {}
self.raw = utils.parse_json(self.data)
all=Group('all')
self.groups = dict(all=all)
group = None
for (group_name, hosts) in self.raw.items():
group = groups[group_name] = Group(group_name)
host = None
for hostname in hosts:
host = Host(hostname)
group.add_host(host)
# FIXME: hack shouldn't be needed
all.add_host(host)
all.add_child_group(group)
return groups
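For reference, a minimal sketch (not from the commit) of the kind of external script the class above expects: it is invoked with --list and must print a JSON mapping of group names to lists of hostnames. The groups and hosts below are hypothetical.

#!/usr/bin/env python
# hypothetical external inventory script consumed by InventoryScript above
import json
import sys

if __name__ == '__main__':
    if len(sys.argv) > 1 and sys.argv[1] == '--list':
        print json.dumps({
            "atlanta": ["www01.example.com", "www02.example.com"],
            "southeast": ["db01.example.com"],
        })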
......@@ -21,6 +21,7 @@ import sys
import os
import shlex
import re
import codecs
import jinja2
import yaml
import optparse
......@@ -170,14 +171,6 @@ def path_dwim(basedir, given):
else:
return os.path.join(basedir, given)
def async_poll_status(jid, host, clock, result):
if 'finished' in result:
return "<job %s> finished on %s" % (jid, host)
elif 'failed' in result:
return "<job %s> FAILED on %s" % (jid, host)
else:
return "<job %s> polling on %s, %s remaining" % (jid, host, clock)
def json_loads(data):
return json.loads(data)
......@@ -206,7 +199,29 @@ def parse_json(data):
return { "failed" : True, "parsed" : False, "msg" : data }
return results
_KEYCRE = re.compile(r"\$(\w+)")
_LISTRE = re.compile(r"(\w+)\[(\d+)\]")
def varLookup(name, vars):
''' find the contents of a possibly complex variable in vars. '''
path = name.split('.')
space = vars
for part in path:
if part in space:
space = space[part]
elif "[" in part:
m = _LISTRE.search(part)
if not m:
return
try:
space = space[m.group(1)][int(m.group(2))]
except (KeyError, IndexError):
return
else:
return
return space
_KEYCRE = re.compile(r"\$(?P<complex>\{){0,1}((?(complex)[\w\.\[\]]+|\w+))(?(complex)\})")
# if { -> complex if complex, allow . and need trailing }
def varReplace(raw, vars):
'''Perform variable replacement of $vars
......@@ -228,8 +243,9 @@ def varReplace(raw, vars):
# Determine replacement value (if unknown variable then preserve
# original)
varname = m.group(1).lower()
replacement = str(vars.get(varname, m.group()))
varname = m.group(2)
replacement = unicode(varLookup(varname, vars) or m.group())
start, end = m.span()
done.append(raw[:start]) # Keep stuff leading up to token
......@@ -238,21 +254,29 @@ def varReplace(raw, vars):
return ''.join(done)
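A quick sketch (not in the commit) of the nested lookup in action, assuming the varLookup() definition above; this is what backs the ${foo.nested_hash_key.nested_subkey[array_index]} syntax noted in the changelog.

vars = {'foo': {'nested_hash_key': {'nested_subkey': ['zero', 'one']}}}
print varLookup('foo.nested_hash_key.nested_subkey[1]', vars)    # -> one
print varLookup('foo.missing_key', vars)                         # -> None, so varReplace keeps the raw token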
def template(text, vars, setup_cache):
def template(text, vars, setup_cache, no_engine=True):
''' run a text buffer through the templating engine '''
vars = vars.copy()
text = varReplace(str(text), vars)
vars['hostvars'] = setup_cache
template = jinja2.Template(text)
return template.render(vars)
text = varReplace(unicode(text), vars)
if no_engine:
# used when processing include: directives so that Jinja is evaluated
# in a later context when more variables are available
return text
else:
template = jinja2.Template(text)
res = template.render(vars)
if text.endswith('\n') and not res.endswith('\n'):
res = res + '\n'
return res
def double_template(text, vars, setup_cache):
return template(template(text, vars, setup_cache), vars, setup_cache)
def template_from_file(path, vars, setup_cache):
def template_from_file(path, vars, setup_cache, no_engine=True):
''' run a file through the templating engine '''
data = file(path).read()
return template(data, vars, setup_cache)
data = codecs.open(path, encoding="utf8").read()
return template(data, vars, setup_cache, no_engine=no_engine)
def parse_yaml(data):
return yaml.load(data)
......@@ -267,11 +291,12 @@ def parse_yaml_from_file(path):
def parse_kv(args):
''' convert a string of key/value items to a dict '''
options = {}
vargs = shlex.split(args, posix=True)
for x in vargs:
if x.find("=") != -1:
k, v = x.split("=")
options[k]=v
if args is not None:
vargs = shlex.split(args, posix=True)
for x in vargs:
if x.find("=") != -1:
k, v = x.split("=", 1)
options[k]=v
return options
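Illustrative calls (not in the commit) showing the behaviour of parse_kv() above, including the new None handling and the split('=', 1) change that preserves '=' characters inside values, which matters for --extra-vars.

print parse_kv('a=2 b=3')                   # {'a': '2', 'b': '3'}
print parse_kv('dsn=user=admin;db=prod')    # {'dsn': 'user=admin;db=prod'}
print parse_kv(None)                        # {}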
class SortedOptParser(optparse.OptionParser):
......@@ -285,7 +310,7 @@ def base_parser(constants=C, usage="", output_opts=False, runas_opts=False, asyn
parser = SortedOptParser(usage)
parser.add_option('-D','--debug', default=False, action="store_true",
help='debug standard error output of remote modules')
help='debug mode')
parser.add_option('-f','--forks', dest='forks', default=constants.DEFAULT_FORKS, type='int',
help="specify number of parallel processes to use (default=%s)" % constants.DEFAULT_FORKS)
parser.add_option('-i', '--inventory-file', dest='inventory',
......@@ -293,6 +318,8 @@ def base_parser(constants=C, usage="", output_opts=False, runas_opts=False, asyn
default=constants.DEFAULT_HOST_LIST)
parser.add_option('-k', '--ask-pass', default=False, dest='ask_pass', action='store_true',
help='ask for SSH password')
parser.add_option('--private-key', default=None, dest='private_key_file',
help='use this file to authenticate the connection')
parser.add_option('-K', '--ask-sudo-pass', default=False, dest='ask_sudo_pass', action='store_true',
help='ask for sudo password')
parser.add_option('-M', '--module-path', dest='module_path',
......@@ -311,6 +338,8 @@ def base_parser(constants=C, usage="", output_opts=False, runas_opts=False, asyn
if runas_opts:
parser.add_option("-s", "--sudo", default=False, action="store_true",
dest='sudo', help="run operations with sudo (nopasswd)")
parser.add_option('-U', '--sudo-user', dest='sudo_user', help='desired sudo user (default=root)',
default=None) # Can't default to root because we need to detect when this option was given
parser.add_option('-u', '--user', default=constants.DEFAULT_REMOTE_USER,
dest='remote_user',
help='connect as this user (default=%s)' % constants.DEFAULT_REMOTE_USER)
......
......@@ -25,14 +25,12 @@ import os
import sys
import shlex
import subprocess
import syslog
import traceback
APT_PATH = "/usr/bin/apt-get"
APT = "DEBIAN_PRIORITY=critical %s" % APT_PATH
def debug(msg):
print >>sys.stderr, msg
def exit_json(rc=0, **kwargs):
print json.dumps(kwargs)
sys.exit(rc)
......@@ -84,7 +82,7 @@ def install(pkgspec, cache, upgrade=False, default_release=None):
name, version = package_split(pkgspec)
installed, upgradable = package_status(name, version, cache)
if not installed or (upgrade and upgradable):
cmd = "%s -q -y install '%s'" % (APT, pkgspec)
cmd = "%s --option Dpkg::Options::=--force-confold -q -y install '%s'" % (APT, pkgspec)
if default_release:
cmd += " -t '%s'" % (default_release,)
rc, out, err = run_apt(cmd)
......@@ -116,6 +114,8 @@ if not os.path.exists(APT_PATH):
argfile = sys.argv[1]
args = open(argfile, 'r').read()
items = shlex.split(args)
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Invoked with %s' % args)
if not len(items):
fail_json(msg='the module requires arguments -a')
......
......@@ -28,13 +28,18 @@ import subprocess
import sys
import datetime
import traceback
import syslog
# ===========================================
# FIXME: better error handling
argsfile = sys.argv[1]
items = shlex.split(file(argsfile).read())
args = open(argsfile, 'r').read()
items = shlex.split(args)
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Invoked with %s' % args)
params = {}
for x in items:
......
......@@ -30,6 +30,7 @@ import datetime
import traceback
import signal
import time
import syslog
def daemonize_self():
# daemonizing code: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/66012
......@@ -76,6 +77,9 @@ wrapped_module = sys.argv[3]
argsfile = sys.argv[4]
cmd = "%s %s" % (wrapped_module, argsfile)
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Invoked with %s' % " ".join(sys.argv[1:]))
# setup logging directory
logdir = os.path.expanduser("~/.ansible_async")
log_path = os.path.join(logdir, jid)
......@@ -128,52 +132,68 @@ def _run_command(wrapped_cmd, jid, log_path):
# immediately exit this process, leaving an orphaned process
# running which immediately forks a supervisory timing process
pid = os.fork()
if pid != 0:
# the parent indicates the job has started
# print "RETURNING SUCCESS IN PARENT"
print json.dumps({ "started" : 1, "ansible_job_id" : jid, "results_file" : log_path })
sys.stdout.flush()
# we need to not return immediately such that the launched command has an attempt
# to initialize PRIOR to ansible trying to clean up the launch directory (and argsfile)
# this probably could be done with some IPC later. Modules should always read
# the argsfile at the very first start of their execution anyway
time.sleep(1)
sys.exit(0)
else:
# the kid manages the job
# WARNING: the following call may be total overkill
daemonize_self()
# we are now daemonized in this other fork but still
# want to create a supervisory process
#print "DAEMONIZED KID MAKING MORE KIDS"
sub_pid = os.fork()
if sub_pid == 0:
#print "RUNNING IN KID A"
_run_command(cmd, jid, log_path)
#print "KID A COMPLETE"
sys.stdout.flush()
sys.exit(0)
else:
#print "WATCHING IN KID B"
remaining = int(time_limit)
if os.path.exists("/proc/%s" % sub_pid):
#print "STILL RUNNING"
time.sleep(1)
remaining = remaining - 1
else:
#print "DONE IN KID B"
sys.stdout.flush()
sys.exit(0)
if remaining == 0:
#print "SLAYING IN KID B"
os.kill(sub_pid, signals.SIGKILL)
sys.stdout.flush()
sys.exit(1)
sys.stdout.flush()
sys.exit(0)
#import logging
#import logging.handlers
#logger = logging.getLogger("ansible_async")
#logger.setLevel(logging.WARNING)
#logger.addHandler( logging.handlers.SysLogHandler("/dev/log") )
def debug(msg):
#logger.warning(msg)
pass
try:
pid = os.fork()
if pid:
# Notify the overlord that the async process started
# we need to not return immediately such that the launched command has an attempt
# to initialize PRIOR to ansible trying to clean up the launch directory (and argsfile)
# this probably could be done with some IPC later. Modules should always read
# the argsfile at the very first start of their execution anyway
time.sleep(1)
debug("Return async_wrapper task started.")
print json.dumps({ "started" : 1, "ansible_job_id" : jid, "results_file" : log_path })
sys.stdout.flush()
sys.exit(0)
else:
# The actual wrapper process
# Daemonize, so we keep on running
daemonize_self()
# we are now daemonized, create a supervisory process
debug("Starting module and watcher")
sub_pid = os.fork()
if sub_pid:
# the parent stops the process after the time limit
remaining = int(time_limit)
# set the child process group id to kill all children
os.setpgid(sub_pid, sub_pid)
debug("Start watching %s (%s)"%(sub_pid, remaining))
time.sleep(5)
while os.waitpid(sub_pid, os.WNOHANG) == (0, 0):
debug("%s still running (%s)"%(sub_pid, remaining))
time.sleep(5)
remaining = remaining - 5
if remaining == 0:
debug("Now killing %s"%(sub_pid))
os.killpg(sub_pid, signal.SIGKILL)
debug("Sent kill to group %s"%sub_pid)
time.sleep(1)
sys.exit(0)
debug("Done in kid B.")
os._exit(0)
else:
# the child process runs the actual module
debug("Start module (%s)"%os.getpid())
_run_command(cmd, jid, log_path)
debug("Module complete (%s)"%os.getpid())
sys.exit(0)
except Exception, err:
debug("error: %s"%(err))
raise err
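Condensed for reference, a sketch (not from the commit) of the supervision pattern used above: the worker runs in its own process group, the watcher polls it with waitpid(WNOHANG), and the whole group is killed once the time limit runs out.

import os
import signal
import time

def supervise(run_worker, time_limit):
    pid = os.fork()
    if pid == 0:
        run_worker()                 # child: run the wrapped module
        os._exit(0)
    os.setpgid(pid, pid)             # own process group, so killpg also reaps grandchildren
    remaining = time_limit
    while os.waitpid(pid, os.WNOHANG) == (0, 0):
        time.sleep(5)
        remaining -= 5
        if remaining <= 0:
            os.killpg(pid, signal.SIGKILL)
            break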
......@@ -29,9 +29,12 @@ import datetime
import traceback
import shlex
import os
import syslog
argfile = sys.argv[1]
args = open(argfile, 'r').read()
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Invoked with %s' % args)
shell = False
......
......@@ -21,6 +21,7 @@
import sys
import os
import shlex
import syslog
# ===========================================
# convert arguments of form a=b c=d
......@@ -32,7 +33,11 @@ if len(sys.argv) == 1:
argfile = sys.argv[1]
if not os.path.exists(argfile):
sys.exit(1)
items = shlex.split(open(argfile, 'r').read())
args = open(argfile, 'r').read()
items = shlex.split(args)
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Invoked with %s' % args)
params = {}
......@@ -43,9 +48,9 @@ for x in items:
src = params['src']
dest = params['dest']
if src:
src = os.path.expanduser(src)
src = os.path.expanduser(src)
if dest:
dest = os.path.expanduser(dest)
dest = os.path.expanduser(dest)
# raise an error if there is no src file
if not os.path.exists(src):
......
......@@ -22,4 +22,5 @@
# facter
# ruby-json
/usr/bin/facter --json
/usr/bin/logger -t ansible-facter Invoked as-is
/usr/bin/facter --json 2>/dev/null
......@@ -18,19 +18,31 @@
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# I wanted to keep this simple at first, so for now this checks out
# from the MASTER branch of a repo at a particular SHA or
# from the given branch of a repo at a particular SHA or
# tag. Latest is not supported, you should not be doing
# that. Branch checkouts are not supported. Contribs
# welcome! -- MPD
# that. Contribs welcome! -- MPD
try:
import json
except ImportError:
import simplejson as json
import os
import re
import sys
import shlex
import subprocess
import syslog
# ===========================================
# Basic support methods
def exit_json(rc=0, **kwargs):
print json.dumps(kwargs)
sys.exit(rc)
def fail_json(**kwargs):
kwargs['failed'] = True
exit_json(rc=1, **kwargs)
# ===========================================
# convert arguments of form a=b c=d
......@@ -38,29 +50,19 @@ import subprocess
# FIXME: make more idiomatic
if len(sys.argv) == 1:
print json.dumps({
"failed" : True,
"msg" : "the command module requires arguments (-a)"
})
sys.exit(1)
fail_json(msg="the command module requires arguments (-a)")
argfile = sys.argv[1]
if not os.path.exists(argfile):
print json.dumps({
"failed" : True,
"msg" : "Argument file not found"
})
sys.exit(1)
fail_json(msg="Argument file not found")
args = open(argfile, 'r').read()
items = shlex.split(args)
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Invoked with %s' % args)
if not len(items):
print json.dumps({
"failed" : True,
"msg" : "the command module requires arguments (-a)"
})
sys.exit(1)
fail_json(msg="the command module requires arguments (-a)")
params = {}
for x in items:
......@@ -69,7 +71,8 @@ for x in items:
dest = params['dest']
repo = params['repo']
version = params['version']
branch = params.get('branch', 'master')
version = params.get('version', 'HEAD')
# ===========================================
......@@ -81,7 +84,7 @@ def get_version(dest):
sha = sha[0].split()[1]
return sha
def clone(repo, dest):
def clone(repo, dest, branch):
''' makes a new git repo if it does not already exist '''
try:
os.makedirs(dest)
......@@ -89,12 +92,68 @@ def clone(repo, dest):
pass
cmd = "git clone %s %s" % (repo, dest)
cmd = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
rc = cmd.returncode
if branch is None or rc != 0:
return (out, err)
os.chdir(dest)
cmd = "git checkout -b %s origin/%s" % (branch, branch)
cmd = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
return cmd.communicate()
def pull(repo, dest):
def reset(dest):
'''
Resets the index and working tree to HEAD.
Discards any changes to tracked files in working
tree since that commit.
'''
os.chdir(dest)
cmd = "git reset --hard HEAD"
cmd = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
rc = cmd.returncode
return (rc, out, err)
def switchLocalBranch(branch):
cmd = "git checkout %s" % branch
cmd = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
return cmd.communicate()
def pull(repo, dest, branch):
''' updates repo from remote sources '''
os.chdir(dest)
cmd = "git pull -u origin"
cmd = "git branch -a"
cmd = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(gbranch_out, gbranch_err) = cmd.communicate()
try:
m = re.search('^\* (\S+|\(no branch\))$', gbranch_out, flags=re.M)
cur_branch = m.group(1)
m = re.search('\s+remotes/origin/HEAD -> origin/(\S+)', gbranch_out, flags=re.M)
default_branch = m.group(1)
except:
fail_json(msg="could not determine branch data - received: %s" % gbranch_out)
if branch is None:
if cur_branch != default_branch:
(out, err) = switchLocalBranch(default_branch)
cmd = "git pull -u origin"
elif branch == cur_branch:
cmd = "git pull -u origin"
else:
m = re.search('^\s+%s$' % branch, gbranch_out, flags=re.M)  # see if we've already checked it out
if m is None:
cmd = "git checkout -b %s origin/%s" % (branch, branch)
else:
(out, err) = switchLocalBranch(branch)
cmd = "git pull -u origin"
cmd = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
return cmd.communicate()
......@@ -120,33 +179,26 @@ out, err, status = (None, None, None)
before = None
if not os.path.exists(gitconfig):
(out, err) = clone(repo, dest)
(out, err) = clone(repo, dest, branch)
else:
# else do a pull
before = get_version(dest)
(out, err) = pull(repo, dest)
(rc, out, err) = reset(dest)
if rc != 0:
fail_json(out=out, err=err)
(out, err) = pull(repo, dest, branch)
# handle errors from clone or pull
if out.find('error') != -1:
print json.dumps({
"failed" : True,
"out" : out,
"err" : err
})
sys.exit(1)
fail_json(out=out, err=err)
# switch to version specified regardless of whether
# we cloned or pulled
(out, err) = switchver(version, dest)
if err.find('error') != -1:
print json.dumps({
"failed" : True,
"out" : out,
"err" : err
})
sys.exit(1)
fail_json(out=out, err=err)
# determine if we changed anything
......@@ -156,9 +208,4 @@ changed = False
if before != after:
changed = True
print json.dumps({
"changed" : changed,
"before" : before,
"after" : after
})
exit_json(changed=changed, before=before, after=after)
......@@ -26,19 +26,14 @@ import grp
import shlex
import subprocess
import sys
import syslog
GROUPADD = "/usr/sbin/groupadd"
GROUPDEL = "/usr/sbin/groupdel"
GROUPMOD = "/usr/sbin/groupmod"
def debug(msg):
# ansible ignores stderr, so it's safe to use for debug
print >>sys.stderr, msg
#pass
def exit_json(rc=0, **kwargs):
if 'name' in kwargs:
debug("add group info to exit_json")
add_group_info(kwargs)
print json.dumps(kwargs)
sys.exit(rc)
......@@ -59,7 +54,6 @@ def add_group_info(kwargs):
def group_del(group):
cmd = [GROUPDEL, group]
debug("Arguments to groupdel: %s" % (" ".join(cmd)))
rc = subprocess.call(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if rc == 0:
return True
......@@ -72,8 +66,9 @@ def group_add(group, **kwargs):
if key == 'gid' and kwargs[key] is not None:
cmd.append('-g')
cmd.append(kwargs[key])
elif key == 'system' and kwargs[key] == 'yes':
cmd.append('-r')
cmd.append(group)
debug("Arguments to groupadd: %s" % (" ".join(cmd)))
rc = subprocess.call(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if rc == 0:
return True
......@@ -91,7 +86,6 @@ def group_mod(group, **kwargs):
if len(cmd) == 1:
return False
cmd.append(group)
debug("Arguments to groupmod: %s" % (" ".join(cmd)))
rc = subprocess.call(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if rc == 0:
return True
......@@ -138,6 +132,8 @@ if len(sys.argv) == 2 and os.path.exists(sys.argv[1]):
else:
args = ' '.join(sys.argv[1:])
items = shlex.split(args)
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Invoked with %s' % args)
if not len(items):
fail_json(msg='the module requires arguments -a')
......@@ -151,9 +147,12 @@ for x in items:
state = params.get('state','present')
name = params.get('name', None)
gid = params.get('gid', None)
system = params.get('system', 'no')
if state not in [ 'present', 'absent' ]:
fail_json(msg='invalid state')
if system not in ['yes', 'no']:
fail_json(msg='invalid system')
if name is None:
fail_json(msg='name is required')
......@@ -165,7 +164,7 @@ if state == 'absent':
exit_json(name=name, changed=changed)
elif state == 'present':
if not group_exists(name):
changed = group_add(name, gid=gid)
changed = group_add(name, gid=gid, system=system)
else:
changed = group_mod(name, gid=gid)
......
......@@ -18,4 +18,5 @@
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
/usr/bin/logger -t ansible-ohai Invoked as-is
/usr/bin/ohai
......@@ -22,4 +22,10 @@ try:
except ImportError:
import simplejson as json
import os
import syslog
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Invoked as-is')
print json.dumps({ "ping" : "pong" })
#!/usr/bin/python
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# hey, the Ansible raw module isn't really a remotely transferred
# module. All the magic happens in Runner.py, see the web docs
# for more details.
......@@ -21,89 +21,216 @@ try:
import json
except ImportError:
import simplejson as json
import os
import sys
import shlex
import subprocess
import os.path
import syslog
# TODO: switch to fail_json and other helper functions
# like other modules are using
# ===========================================
SERVICE = None
CHKCONFIG = None
def fail_json(d):
print json.dumps(d)
sys.exit(1)
def _find_binaries():
# list of possible paths for service/chkconfig binaries
# with the most probable first
global CHKCONFIG
global SERVICE
paths = ['/sbin', '/usr/sbin', '/bin', '/usr/bin']
binaries = [ 'service', 'chkconfig', 'update-rc.d' ]
location = dict()
for binary in binaries:
location[binary] = None
for binary in binaries:
for path in paths:
if os.path.exists(path + '/' + binary):
location[binary] = path + '/' + binary
break
if location.get('chkconfig', None):
CHKCONFIG = location['chkconfig']
elif location.get('update-rc.d', None):
CHKCONFIG = location['update-rc.d']
else:
fail_json(dict(failed=True, msg='unable to find chkconfig or update-rc.d binary'))
if location.get('service', None):
SERVICE = location['service']
else:
fail_json(dict(failed=True, msg='unable to find service binary'))
def _run(cmd):
# returns (rc, stdout, stderr) from shell command
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
stdout, stderr = process.communicate()
return (process.returncode, stdout, stderr)
def _do_enable(name, enable):
# we change argument depending on real binary used
# update-rc.d wants enable/disable while
# chkconfig wants on/off
valid_argument = {'on': 'on', 'off': 'off'}
if CHKCONFIG.endswith("update-rc.d"):
valid_argument['on'] = "enable"
valid_argument['off'] = "disable"
if enable.lower() in ['on', 'true', 'yes', 'enable']:
rc, stdout, stderr = _run("%s %s %s" % (CHKCONFIG, name, valid_argument['on']))
elif enable.lower() in ['off', 'false', 'no', 'disable']:
rc, stdout, stderr = _run("%s %s %s" % (CHKCONFIG, name, valid_argument['off']))
return rc, stdout, stderr
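# For example (hypothetical service name "httpd"), enabled=yes on a
# chkconfig-based system runs roughly "chkconfig httpd on", while on a system
# that only has update-rc.d it runs "update-rc.d httpd enable" instead.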
argfile = sys.argv[1]
args = open(argfile, 'r').read()
items = shlex.split(args)
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Invoked with %s' % args)
if not len(items):
print json.dumps(dict(failed=True, msg='this module requires arguments (-a)'))
sys.exit(1)
fail_json(dict(failed=True, msg='this module requires arguments (-a)'))
params = {}
for x in items:
(k, v) = x.split("=")
params[k] = v
for arg in items:
if "=" not in arg:
fail_json(dict(failed=True, msg='expected key=value format arguments'))
(name, value) = arg.split("=")
params[name] = value
name = params.get('name', None)
if name is None:
fail_json(dict(failed=True, msg='missing name'))
name = params['name']
state = params.get('state','unknown')
state = params.get('state', None)
list_items = params.get('list', None)
enable = params.get('enabled', params.get('enable', None))
# running and started are the same
if state not in [ 'running', 'started', 'stopped', 'restarted' ]:
print json.dumps(dict(failed=True, msg='invalid state'))
sys.exit(1)
if state and state.lower() not in [ 'running', 'started', 'stopped', 'restarted','reloaded' ]:
fail_json(dict(failed=True, msg='invalid value for state'))
if list_items and list_items.lower() not in [ 'status' ]:
fail_json(dict(failed=True, msg='invalid value for list'))
if enable and enable.lower() not in [ 'on', 'off', 'true', 'false', 'yes', 'no', 'enable', 'disable' ]:
fail_json(dict(failed=True, msg='invalid value for enable'))
# ===========================================
# get service status
# find binaries locations on minion
_find_binaries()
status = os.popen("service %s status" % name).read()
# ===========================================
# determine if we are going to change anything
# get service status
rc, status_stdout, status_stderr = _run("%s %s status" % (SERVICE, name))
status = status_stdout + status_stderr
running = False
if status.find("not running") != -1:
if status_stdout.find("stopped") != -1 or rc == 3:
running = False
elif status.find("running") != -1:
elif status_stdout.find("running") != -1 or rc == 0:
running = True
elif name == 'iptables' and status.find("ACCEPT") != -1:
elif name == 'iptables' and status_stdout.find("ACCEPT") != -1:
# iptables status command output is lame
# TODO: lookup if we can use a return code for this instead?
running = True
changed = False
if not running and state == "started":
changed = True
elif running and state == "stopped":
changed = True
elif state == "restarted":
changed = True
if state or enable:
rc = 0
out = ''
err = ''
changed = False
if enable:
rc_enable, out_enable, err_enable = _do_enable(name, enable)
rc += rc_enable
out += out_enable
err += err_enable
if state:
# a state change command has been requested
# ===========================================
# determine if we are going to change anything
if not running and state in ("started", "running"):
changed = True
elif running and state in ("stopped","reloaded"):
changed = True
elif state == "restarted":
changed = True
# ===========================================
# run change commands if we need to
if changed:
if state in ('started', 'running'):
rc_state, stdout, stderr = _run("%s %s start" % (SERVICE, name))
elif state == 'stopped':
rc_state, stdout, stderr = _run("%s %s stop" % (SERVICE, name))
elif state == 'reloaded':
rc_state, stdout, stderr = _run("%s %s reload" % (SERVICE, name))
elif state == 'restarted':
rc1, stdout1, stderr1 = _run("%s %s stop" % (SERVICE, name))
rc2, stdout2, stderr2 = _run("%s %s start" % (SERVICE, name))
rc_state = rc1 + rc2
stdout = stdout1 + stdout2
stderr = stderr1 + stderr2
out += stdout
err += stderr
rc = rc + rc_state
if rc != 0:
print json.dumps({
"failed" : 1,
"rc" : rc,
})
print >> sys.stderr, out + err
sys.exit(1)
# ===============================================
# success
result = {"changed": changed}
rc, stdout, stderr = _run("%s %s status" % (SERVICE, name))
if list_items and list_items in [ 'status' ]:
result['status'] = stdout
print json.dumps(result)
elif list_items is not None:
# solo list=status mode, don't change anything, just return
# suitable for /usr/bin/ansible usage or API, playbooks
# not so much
# ===========================================
# run change commands if we need to
def _run(cmd):
return subprocess.call(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
rc = 0
if changed:
if state in ('started', 'running'):
rc = _run("service %s start" % name)
elif state == 'stopped':
rc = _run("service %s stop" % name)
elif state == 'restarted':
rc1 = _run("service %s stop" % name)
rc2 = _run("service %s start" % name)
rc = rc1 and rc2
if rc != 0:
# yeah, should probably include output of failure...
print json.dumps({
"failed" : 1,
"rc" : rc
"status" : status
})
sys.exit(1)
# ===============================================
# success
else:
print json.dumps(dict(failed=True, msg="expected state or list parameters"))
print json.dumps({
"changed" : changed
})
sys.exit(0)
......@@ -31,6 +31,7 @@ import socket
import struct
import subprocess
import traceback
import syslog
try:
import json
......@@ -224,7 +225,7 @@ def get_iface_hwaddr(iface):
return ''.join(['%02x:' % ord(char) for char in info[18:24]])[:-1]
def get_network_facts(facts):
facts['fqdn'] = socket.gethostname()
facts['fqdn'] = socket.getfqdn()
facts['hostname'] = facts['fqdn'].split('.')[0]
facts['interfaces'] = get_interfaces()
for iface in facts['interfaces']:
......@@ -241,6 +242,9 @@ def get_network_facts(facts):
facts[iface]['ipv4'] = {}
facts[iface]['ipv4'] = { 'address': data[1].split(':')[1],
'netmask': data[-1].split(':')[1] }
ip = struct.unpack("!L", socket.inet_aton(facts[iface]['ipv4']['address']))[0]
mask = struct.unpack("!L", socket.inet_aton(facts[iface]['ipv4']['netmask']))[0]
facts[iface]['ipv4']['network'] = socket.inet_ntoa(struct.pack("!L", ip & mask))
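# worked example (illustrative): address 192.168.1.10 and netmask 255.255.255.0
# unpack to 0xC0A8010A and 0xFFFFFF00; ANDing and repacking the result gives
# the network address 192.168.1.0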
if 'inet6 addr' in line:
(ip, prefix) = data[2].split('/')
scope = data[3].split(':')[1].lower()
......@@ -263,8 +267,32 @@ def get_public_ssh_host_keys(facts):
else:
facts['ssh_host_key_rsa_public'] = rsa.split()[1]
def get_selinux_facts(facts):
if os.path.exists("/usr/sbin/sestatus"):
cmd = subprocess.Popen("/usr/sbin/sestatus", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = cmd.communicate()
if err == '':
facts['selinux'] = {}
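# sestatus output is parsed positionally; a typical layout (illustrative) is:
#   SELinux status:                 enabled
#   SELinuxfs mount:                /selinux
#   Current mode:                   enforcing
#   Mode from config file:          enforcing
#   Policy version:                 24
#   Policy from config file:        targeted
# hence lines 0 and 2-5 are searched below.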
list = out.split("\n")
status = re.search("(enabled|disabled)", list[0])
if status.group() == "enabled":
mode = re.search("(enforcing|disabled|permissive)", list[2])
config_mode = re.search("(enforcing|disabled|permissive)", list[3])
policyvers = re.search("\d+", list[4])
type = re.search("(targeted|strict|mls)", list[5])
facts['selinux']['status'] = status.group()
facts['selinux']['mode'] = mode.group()
facts['selinux']['config_mode'] = config_mode.group()
facts['selinux']['policyvers'] = policyvers.group()
facts['selinux']['type'] = type.group()
elif status.group() == "disabled":
facts['selinux']['status'] = status.group()
else:
facts['selinux'] = False
def get_service_facts(facts):
get_public_ssh_host_keys(facts)
get_selinux_facts(facts)
def ansible_facts():
facts = {}
......@@ -295,7 +323,10 @@ except:
(k,v) = opt.split("=")
setup_options[k]=v
ansible_file = setup_options.get('metadata', DEFAULT_ANSIBLE_SETUP)
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Invoked with %s' % setup_options)
ansible_file = os.path.expandvars(setup_options.get('metadata', DEFAULT_ANSIBLE_SETUP))
ansible_dir = os.path.dirname(ansible_file)
# create the config dir if it doesn't exist
......@@ -362,9 +393,11 @@ md5sum2 = os.popen("md5sum %s" % ansible_file).read().split()[0]
if md5sum != md5sum2:
changed = True
setup_options['written'] = ansible_file
setup_options['changed'] = changed
setup_options['md5sum'] = md5sum2
setup_result = {}
setup_result['written'] = ansible_file
setup_result['changed'] = changed
setup_result['md5sum'] = md5sum2
setup_result['ansible_facts'] = setup_options
print json.dumps(setup_options)
print json.dumps(setup_result)
......@@ -21,6 +21,7 @@ import sys
import os
import shlex
import base64
import syslog
try:
import json
......@@ -36,7 +37,11 @@ if len(sys.argv) == 1:
argfile = sys.argv[1]
if not os.path.exists(argfile):
sys.exit(1)
items = shlex.split(open(argfile, 'r').read())
args = open(argfile, 'r').read()
items = shlex.split(args)
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Invoked with %s' % args)
params = {}
for x in items:
......
......@@ -25,22 +25,21 @@ import os
import pwd
import grp
import shlex
import spwd
import subprocess
import sys
import syslog
try:
import spwd
HAVE_SPWD=True
except:
HAVE_SPWD=False
USERADD = "/usr/sbin/useradd"
USERMOD = "/usr/sbin/usermod"
USERDEL = "/usr/sbin/userdel"
def debug(msg):
# ansible ignores stderr, so it's safe to use for debug
print >>sys.stderr, msg
#pass
def exit_json(rc=0, **kwargs):
if 'name' in kwargs:
debug("add user info to exit_json")
add_user_info(kwargs)
print json.dumps(kwargs)
sys.exit(rc)
......@@ -75,7 +74,6 @@ def user_del(user, **kwargs):
elif key == 'remove' and kwargs[key]:
cmd.append('-r')
cmd.append(user)
debug("Arguments to userdel: %s" % (" ".join(cmd)))
rc = subprocess.call(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if rc == 0:
return True
......@@ -117,14 +115,19 @@ def user_add(user, **kwargs):
cmd.append('-m')
else:
cmd.append('-M')
elif key == 'system' and kwargs[key] == 'yes':
cmd.append('-r')
cmd.append(user)
debug("Arguments to useradd: %s" % (" ".join(cmd)))
rc = subprocess.call(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if rc == 0:
return True
else:
return False
"""
Without spwd, we would have to resort to reading /etc/shadow
to get the encrypted string. For now, punt on idempotent password changes.
"""
def user_mod(user, **kwargs):
cmd = [USERMOD]
info = user_info(user)
......@@ -141,13 +144,27 @@ def user_mod(user, **kwargs):
cmd.append('-g')
cmd.append(kwargs[key])
elif key == 'groups' and kwargs[key] is not None:
for g in kwargs[key].split(','):
current_groups = user_group_membership(user)
groups = kwargs[key].split(',')
for g in groups:
if not group_exists(g):
fail_json(msg="Group %s does not exist" % (g))
groups = ",".join(user_group_membership(user))
if groups != kwargs[key]:
group_diff = set(sorted(current_groups)).symmetric_difference(set(sorted(groups)))
groups_need_mod = False
if group_diff:
if kwargs['append'] is not None and kwargs['append'] == 'yes':
for g in groups:
if g in group_diff:
cmd.append('-a')
groups_need_mod = True
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
cmd.append(kwargs[key])
cmd.append(','.join(groups))
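# e.g. (hypothetical groups) a user already in "wheel" with groups=wheel,admins
# and append=yes yields roughly "usermod -a -G wheel,admins <name>", preserving
# existing memberships; without append, "-G wheel,admins" replaces the
# supplementary group list outright.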
elif key == 'comment':
if kwargs[key] is not None and info[4] != kwargs[key]:
cmd.append('-c')
......@@ -164,15 +181,10 @@ def user_mod(user, **kwargs):
if kwargs[key] is not None and info[1] != kwargs[key]:
cmd.append('-p')
cmd.append(kwargs[key])
elif key == 'append':
if kwargs[key] is not None and kwargs[key] == 'yes':
if 'groups' in kwargs and kwargs['groups'] is not None:
cmd.append('-a')
# skip if no changes to be made
if len(cmd) == 1:
return False
cmd.append(user)
debug("Arguments to usermod: %s" % (" ".join(cmd)))
rc = subprocess.call(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if rc == 0:
return True
......@@ -223,10 +235,12 @@ def user_info(user):
return False
try:
info = get_pwd_info(user)
sinfo = spwd.getspnam(user)
if HAVE_SPWD:
sinfo = spwd.getspnam(user)
except KeyError:
return False
info[1] = sinfo[1]
if HAVE_SPWD:
info[1] = sinfo[1]
return info
# ===========================================
......@@ -250,6 +264,8 @@ if not os.path.exists(USERDEL):
argfile = sys.argv[1]
args = open(argfile, 'r').read()
items = shlex.split(args)
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Invoked with %s' % args)
if not len(items):
fail_json(msg='the module requires arguments -a')
......@@ -278,6 +294,7 @@ remove = params.get('remove', False)
# ===========================================
# following options are specific to useradd
createhome = params.get('createhome', 'yes')
system = params.get('system', 'no')
# ===========================================
# following options are specific to usermod
......@@ -287,6 +304,8 @@ if state not in [ 'present', 'absent' ]:
fail_json(msg='invalid state')
if createhome not in [ 'yes', 'no' ]:
fail_json(msg='invalid createhome')
if system not in ['yes', 'no']:
fail_json(msg='invalid system')
if append not in [ 'yes', 'no' ]:
fail_json(msg='invalid append')
if name is None:
......@@ -302,7 +321,8 @@ elif state == 'present':
if not user_exists(name):
changed = user_add(name, uid=uid, group=group, groups=groups,
comment=comment, home=home, shell=shell,
password=password, createhome=createhome)
password=password, createhome=createhome,
system=system)
else:
changed = user_mod(name, uid=uid, group=group, groups=groups,
comment=comment, home=home, shell=shell,
......
......@@ -27,6 +27,7 @@ except ImportError:
import os
import sys
import subprocess
import syslog
try:
import libvirt
except ImportError:
......@@ -366,6 +367,8 @@ def main():
args = open(argfile, 'r').read()
items = shlex.split(args)
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Invoked with %s' % args)
if not len(items):
return VIRT_FAILED, msg
......
......@@ -27,6 +27,7 @@ import datetime
import shlex
import re
import traceback
import syslog
try:
......@@ -57,13 +58,24 @@ def pkg_to_dict(po):
'epoch':po.epoch,
'release':po.release,
'version':po.version,
'repo':po.ui_from_repo,
'_nevra':po.ui_nevra,
}
if type(po) == yum.rpmsack.RPMInstalledPackage:
d['yumstate'] = 'installed'
d['repo'] = 'installed'
else:
d['yumstate'] = 'available'
d['repo'] = po.repoid
if hasattr(po, 'ui_from_repo'):
d['repo'] = po.ui_from_repo
if hasattr(po, 'ui_nevra'):
d['_nevra'] = po.ui_nevra
else:
d['_nevra'] = '%s-%s-%s.%s' % (po.name, po.version, po.release, po.arch)
return d
......@@ -215,6 +227,9 @@ def ensure(my, state, pkgspec):
if state == 'latest':
updates = my.doPackageLists(pkgnarrow='updates', patterns=[pkgspec]).updates
# sucks but this is for rhel5 - won't matter for rhel6 or fedora or whatnot
e,m,u = yum.parsePackages(updates, [pkgspec], casematch=True)
updates = e + m
avail = my.doPackageLists(pkgnarrow='available', patterns=[pkgspec]).available
if not updates and not avail:
if not my.doPackageLists(pkgnarrow='installed', patterns=[pkgspec]).installed:
......@@ -285,6 +300,8 @@ def main():
args = open(argfile, 'r').read()
items = shlex.split(args)
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Invoked with %s' % args)
if not len(items):
msg = "the yum module requires arguments (-a)"
......
ansible (0.4) debian; urgency=low
* 0.4 update
-- Michael DeHaan <michael.dehaan@gmail.com> Mon, 23 May 2012 19:40:00 -0400
ansible (0.3) debian; urgency=low
* 0.3 update
......
Name: ansible
Release: 1%{?dist}
Summary: Minimal SSH command and control
Version: 0.3.1
Version: 0.4
Group: Development/Libraries
License: GPLv3+
......@@ -46,8 +46,13 @@ cp -v library/* $RPM_BUILD_ROOT/%{_datadir}/ansible/
%doc %{_mandir}/man1/ansible*
%changelog
* Wed May 23 2012 Michael DeHaan <michael.dehaan@gmail.com> - 0.4-0
- Release of 0.4
* Tue May 1 2012 Tim Bielawa <tbielawa@redhat.com> - 0.3.1-1
- Release of 0.3.1. Mostly packaging related changes.
* Mon Apr 23 2012 Michael DeHaan <michael.dehaan@gmail.com> - 0.3-1
- Release of 0.3
......
......@@ -74,12 +74,15 @@ class TestCallbacks(object):
def on_play_start(self, pattern):
EVENTS.append([ 'play start', [ pattern ]])
def on_async_confused(self, msg):
EVENTS.append([ 'async confused', [ msg ]])
def on_async_ok(self, host, res, jid):
EVENTS.append([ 'async ok', [ host ]])
def on_async_poll(self, jid, host, clock, host_result):
def on_async_poll(self, host, res, jid, clock):
EVENTS.append([ 'async poll', [ host ]])
def on_async_failed(self, host, res, jid):
EVENTS.append([ 'async failed', [ host ]])
def on_unreachable(self, host, msg):
EVENTS.append([ 'failed/dark', [ host, msg ]])
......@@ -141,7 +144,7 @@ class TestPlaybook(unittest.TestCase):
runner_callbacks = self.test_callbacks
)
result = self.playbook.run()
print utils.bigjson(dict(events=EVENTS))
# print utils.bigjson(dict(events=EVENTS))
return result
def test_one(self):
......@@ -166,5 +169,6 @@ class TestPlaybook(unittest.TestCase):
# make sure the template module took options from the vars section
data = file('/tmp/ansible_test_data_template.out').read()
print data
assert data.find("ears") != -1, "template success"
# -*- coding: utf-8 -*-
import os
import unittest
import ansible.utils
class TestUtils(unittest.TestCase):
#####################################
### varLookup function tests
def test_varLookup_list(self):
vars = {
'data': {
'who': ['joe', 'jack', 'jeff']
}
}
res = ansible.utils.varLookup('data.who', vars)
assert sorted(res) == sorted(vars['data']['who'])
#####################################
### varReplace function tests
def test_varReplace_simple(self):
template = 'hello $who'
vars = {
'who': 'world',
}
res = ansible.utils.varReplace(template, vars)
assert res == 'hello world'
def test_varReplace_multiple(self):
template = '$what $who'
vars = {
'what': 'hello',
'who': 'world',
}
res = ansible.utils.varReplace(template, vars)
assert res == 'hello world'
def test_varReplace_caps(self):
template = 'hello $whoVar'
vars = {
'whoVar': 'world',
}
res = ansible.utils.varReplace(template, vars)
print res
assert res == 'hello world'
def test_varReplace_middle(self):
template = 'hello $who!'
vars = {
'who': 'world',
}
res = ansible.utils.varReplace(template, vars)
assert res == 'hello world!'
def test_varReplace_alternative(self):
template = 'hello ${who}'
vars = {
'who': 'world',
}
res = ansible.utils.varReplace(template, vars)
assert res == 'hello world'
def test_varReplace_almost_alternative(self):
template = 'hello $who}'
vars = {
'who': 'world',
}
res = ansible.utils.varReplace(template, vars)
assert res == 'hello world}'
def test_varReplace_almost_alternative2(self):
template = 'hello ${who'
vars = {
'who': 'world',
}
res = ansible.utils.varReplace(template, vars)
assert res == template
def test_varReplace_alternative_greed(self):
template = 'hello ${who} }'
vars = {
'who': 'world',
}
res = ansible.utils.varReplace(template, vars)
assert res == 'hello world }'
def test_varReplace_notcomplex(self):
template = 'hello $mydata.who'
vars = {
'data': {
'who': 'world',
},
}
res = ansible.utils.varReplace(template, vars)
print res
assert res == template
def test_varReplace_nested(self):
template = 'hello ${data.who}'
vars = {
'data': {
'who': 'world'
},
}
res = ansible.utils.varReplace(template, vars)
assert res == 'hello world'
def test_varReplace_nested_int(self):
template = '$what ${data.who}'
vars = {
'data': {
'who': 2
},
'what': 'hello',
}
res = ansible.utils.varReplace(template, vars)
assert res == 'hello 2'
def test_varReplace_unicode(self):
template = 'hello $who'
vars = {
'who': u'wórld',
}
res = ansible.utils.varReplace(template, vars)
assert res == u'hello wórld'
def test_varReplace_list(self):
template = 'hello ${data[1]}'
vars = {
'data': [ 'no-one', 'world' ]
}
res = ansible.utils.varReplace(template, vars)
assert res == 'hello world'
def test_varReplace_invalid_list(self):
template = 'hello ${data[1}'
vars = {
'data': [ 'no-one', 'world' ]
}
res = ansible.utils.varReplace(template, vars)
assert res == template
def test_varReplace_list_oob(self):
template = 'hello ${data[2]}'
vars = {
'data': [ 'no-one', 'world' ]
}
res = ansible.utils.varReplace(template, vars)
assert res == template
def test_varReplace_list_nolist(self):
template = 'hello ${data[1]}'
vars = {
'data': { 'no-one': 0, 'world': 1 }
}
res = ansible.utils.varReplace(template, vars)
assert res == template
def test_varReplace_nested_list(self):
template = 'hello ${data[1].msg[0]}'
vars = {
'data': [ 'no-one', {'msg': [ 'world'] } ]
}
res = ansible.utils.varReplace(template, vars)
assert res == 'hello world'
#####################################
### Template function tests
def test_template_basic(self):
template = 'hello {{ who }}'
vars = {
'who': 'world',
}
res = ansible.utils.template(template, vars, {}, no_engine=False)
assert res == 'hello world'
def test_template_whitespace(self):
template = 'hello {{ who }}\n'
vars = {
'who': 'world',
}
res = ansible.utils.template(template, vars, {}, no_engine=False)
assert res == 'hello world\n'
def test_template_unicode(self):
template = 'hello {{ who }}'
vars = {
'who': u'wórld',
}
res = ansible.utils.template(template, vars, {}, no_engine=False)
assert res == u'hello wórld'
#####################################
### key-value parsing
def test_parse_kv_basic(self):
assert (ansible.utils.parse_kv('a=simple b="with space" c="this=that"') ==
{'a': 'simple', 'b': 'with space', 'c': 'this=that'})
---
duck: quack
cow: moo
extguard: " '$favcolor' == 'blue' "
# order of groups, children, and vars is not significant
# so this example mixes them up for maximum testing
[nc:children]
rtp
triangle
[eastcoast:children]
nc
florida
[us:children]
eastcoast
[redundantgroup]
rtp_a
[redundantgroup2]
rtp_a
[redundantgroup3:children]
rtp
[redundantgroup:vars]
rga=1
[redundantgroup2:vars]
rgb=2
[redundantgroup3:vars]
rgc=3
[nc:vars]
b=10000
c=10001
d=10002
[rtp]
rtp_a
rtp_b
rtb_c
[rtp:vars]
a=1
b=2
c=3
[triangle]
tri_a
tri_b
tri_c
[triangle:vars]
a=11
b=12
c=13
[florida]
orlando
miami
[florida:vars]
a=100
b=101
c=102
[eastcoast:vars]
b=100000
c=100001
d=100002
[us:vars]
c=1000000
---
# this is an annotated example of some features available in playbooks
# it shows how to make sure packages are updated, how to make sure
# services are running, and how to template files. It also demos
# change handlers that can restart things (or trigger other actions)
# when resources change. For more advanced examples, see example2.yml
# on all hosts, run as the user root...
- hosts: all
user: root
# make these variables available inside of templates
# for when we use the 'template' action/module later on...
vars:
http_port: 80
max_clients: 200
# define the tasks that are part of this play...
tasks:
# task #1 is to run an arbitrary command
# we'll simulate a long running task, wait for up to 45 seconds, poll every 5
# obviously this does nothing useful but you get the idea
- name: longrunner
action: command /bin/sleep 15
async: 45
poll: 5
# let's demo file operations.
#
# We can 'copy' files or 'template' them instead, using jinja2
# as the templating engine. This is done using the variables
# from the vars section above mixed in with variables bubbled up
# automatically from tools like facter and ohai. 'copy'
# works just like 'template' but does not do variable substitution.
#
# If and only if the file changes, restart apache at the very
# end of the playbook run
- name: write some_random_foo configuration
action: template src=templates/foo.j2 dest=/etc/some_random_foo.conf
notify:
- restart apache
# make sure httpd is installed at the latest version
- name: install httpd
action: yum pkg=httpd state=latest
# make sure httpd is running
- name: httpd start
action: service name=httpd state=running
# handlers are only run when things change, at the very end of each
# play. Let's define some. The names are significant and must
# match the 'notify' sections above
handlers:
# this particular handler is run when some_random_foo.conf
# is changed, and only then
- name: restart apache
action: service name=httpd state=restarted
......@@ -6,7 +6,9 @@
moon: titan
moon2: enceladus
- zeus
- host: zeus
vars:
- ansible_ssh_port: 3001
- group: greek
hosts:
......@@ -25,6 +27,11 @@
- odin
- loki
- group: ruler
hosts:
- zeus
- odin
- group: multiple
hosts:
- saturn