---
page_title: "Ansible - Provisioning"
sidebar_current: "provisioning-ansible"
---

# Ansible Provisioner

**Provisioner name: `"ansible"`**

The ansible provisioner allows you to provision the guest using
[Ansible](http://ansible.com) playbooks by executing `ansible-playbook` from the Vagrant host.

Ansible playbooks are [YAML](http://en.wikipedia.org/wiki/YAML) documents that
comprise the set of steps to be orchestrated on one or more machines. This documentation
page will not go into how to use Ansible or how to write Ansible playbooks, since Ansible
is a complete deployment and configuration management system that is beyond the scope of
a single page of documentation.

<div class="alert alert-warn">
  <p>
    <strong>Warning:</strong> If you're not familiar with Ansible and Vagrant already,
    I recommend starting with the <a href="/v2/provisioning/shell.html">shell
    provisioner</a>. However, if you're comfortable with Vagrant already, Vagrant
    is a great way to learn Ansible.
  </p>
</div>

## Setup Requirements

* [Install Ansible](http://docs.ansible.com/intro_installation.html#installing-the-control-machine) on your Vagrant host.
* Your Vagrant host should ideally provide a recent version of OpenSSH that [supports ControlPersist](http://docs.ansible.com/faq.html#how-do-i-get-ansible-to-reuse-connections-enable-kerberized-ssh-or-have-ansible-pay-attention-to-my-local-ssh-config-file).

## Inventory File

Ansible needs to know which machines a given playbook should run against. It does
this by way of an [inventory](http://docs.ansible.com/intro_inventory.html) file which lists those machines.
In the context of Vagrant, there are two ways to approach working with inventory files.

### Auto-Generated Inventory

The first and simplest option is to not provide one to Vagrant at all. Vagrant will generate an
inventory file encompassing all of the virtual machines it manages, and use it for provisioning
machines. The generated inventory file is stored as part of your local Vagrant environment in `.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory`.

**Groups of Hosts**

The `ansible.groups` option can be used to pass a hash of group names and group members to be included in the generated inventory file.

With this configuration example:

```
ansible.groups = {
  "group1" => ["machine1"],
  "group2" => ["machine2"],
  "all_groups:children" => ["group1", "group2"]
}
```

Vagrant would generate an inventory file that might look like:

```
# Generated by Vagrant

machine1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_private_key_file=/home/.../.vagrant/machines/machine1/virtualbox/private_key
machine2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201 ansible_ssh_private_key_file=/home/.../.vagrant/machines/machine2/virtualbox/private_key

[group1]
machine1

[group2]
machine2

[all_groups:children]
group1
group2
```
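
For orientation, an inventory like the one above would correspond to a multi-machine Vagrantfile along these lines (a minimal sketch; the box name is a placeholder and provider details are omitted):

```ruby
Vagrant.configure("2") do |config|
  # Placeholder box; use whichever base box you normally run.
  config.vm.box = "hashicorp/precise64"

  config.vm.define "machine1"
  config.vm.define "machine2"

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.groups = {
      "group1" => ["machine1"],
      "group2" => ["machine2"],
      "all_groups:children" => ["group1", "group2"]
    }
  end
end
```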

**Notes**

* The generation of group variables blocks (e.g. `[group1:vars]`) is intentionally not supported, as it is [not recommended to store group variables in the main inventory file](http://docs.ansible.com/intro_inventory.html#splitting-out-host-and-group-specific-data). A good practice is to store these group (or host) variables in `YAML` files stored in `group_vars/` or `host_vars/` directories in the playbook (or inventory) directory.
* Unmanaged machines and undefined groups are not added to the inventory, to avoid useless Ansible errors (e.g. *unreachable host* or *undefined child group*).
* Prior to Vagrant 1.7.3, the `ansible_ssh_private_key_file` variable was not set in the generated inventory, but passed as a command line argument to the `ansible-playbook` command.

For example, `machine3`, `group3` and `group1:vars` in the example below would not be added to the generated inventory file:

```
ansible.groups = {
  "group1" => ["machine1"],
  "group2" => ["machine2", "machine3"],
  "all_groups:children" => ["group1", "group2", "group3"],
  "group1:vars" => { "variable1" => 9, "variable2" => "example" }
}
```

### Static Inventory

The second option is for situations where you'd like to have more control over the inventory management.
With the `ansible.inventory_path` option, you can reference a specific inventory resource (e.g. a static inventory file, a [dynamic inventory script](http://docs.ansible.com/intro_dynamic_inventory.html) or even [multiple inventories stored in the same directory](http://docs.ansible.com/intro_dynamic_inventory.html#using-multiple-inventory-sources)). Vagrant will then use this inventory information instead of generating it.

A very simple inventory file for use with Vagrant might look like:

```
default ansible_ssh_host=192.168.111.222
```

where the above IP address is the one set in your Vagrantfile:

```
config.vm.network :private_network, ip: "192.168.111.222"
```
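
Putting these pieces together, a minimal provisioner block wired to this static inventory might look like the following sketch (the `./my_inventory` path is a placeholder, not from the original page):

```ruby
Vagrant.configure("2") do |config|
  config.vm.network :private_network, ip: "192.168.111.222"

  config.vm.provision "ansible" do |ansible|
    ansible.playbook       = "playbook.yml"
    # "./my_inventory" is a placeholder; point this at your own inventory file.
    ansible.inventory_path = "./my_inventory"
    # ansible.limit = "all"  # uncomment to target every host in the inventory
  end
end
```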

**Notes:**

* The machine names in your `Vagrantfile` and in the `ansible.inventory_path` file should correspond, unless you use the `ansible.limit` option to reference the correct machines.
* The SSH host addresses (and ports) must be specified twice: in the `Vagrantfile` and in the `ansible.inventory_path` file.

## Playbook

The second component of a successful Ansible provisioner setup is the Ansible playbook
which contains the steps that should be run on the guest. Ansible's
[playbook documentation](http://docs.ansible.com/playbooks.html) goes into great
detail on how to author playbooks, and there are a number of
[best practices](http://docs.ansible.com/playbooks_best_practices.html) that can be applied to use
Ansible's powerful features effectively. A playbook that installs and starts (or restarts
if it was updated) the NTP daemon via YUM looks like:

```
---
- hosts: all
  tasks:
    - name: ensure ntpd is at the latest version
      yum: pkg=ntp state=latest
      notify:
      - restart ntpd
  handlers:
    - name: restart ntpd
      service: name=ntpd state=restarted
```

You can of course target other operating systems that don't have YUM by changing the
playbook tasks. Ansible ships with a number of [modules](http://docs.ansible.com/modules.html)
that make running otherwise tedious tasks dead simple.

## Running Ansible

To run Ansible against your Vagrant guest, the basic Vagrantfile configuration looks like:

```ruby
Vagrant.configure("2") do |config|
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
```

Since an Ansible playbook can include many files, you may also collect the related files in
a directory structure like this:

```
$ tree
.
|-- Vagrantfile
|-- provisioning
|   |-- group_vars
|   |   |-- all
|   |-- playbook.yml
```

In such an arrangement, the `ansible.playbook` path should be adjusted accordingly:

```ruby
Vagrant.configure("2") do |config|
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/playbook.yml"
  end
end
```

Vagrant will try to run the `playbook.yml` playbook against all machines defined in your Vagrantfile.

**Backward Compatibility Note**:

Up to Vagrant 1.4, the Ansible provisioner could potentially connect (multiple times) to all hosts from the inventory file.
This behaviour is still possible by setting `ansible.limit = 'all'` (see more details below).

## Additional Options

The Ansible provisioner also includes a number of additional options that can be set,
all of which get passed to the `ansible-playbook` command that ships with Ansible (a combined example is shown after this list).

* `ansible.extra_vars` can be used to pass additional variables (with highest priority) to the playbook. This parameter can be a path to a JSON or YAML file, or a hash. For example:

  ```
  ansible.extra_vars = {
    ntp_server: "pool.ntp.org",
    nginx: {
      port: 8008,
      workers: 4
    }
  }
  ```

  These variables take the highest precedence over any other variables.

* `ansible.sudo` can be set to `true` to cause Ansible to perform commands using sudo.
* `ansible.sudo_user` can be set to a string containing the username on the guest that should be used by the sudo command.
* `ansible.ask_sudo_pass` can be set to `true` to require Ansible to prompt for a sudo password.
* `ansible.ask_vault_pass` can be set to `true` to require Ansible to prompt for a vault password.
* `ansible.vault_password_file` can be set to a string containing the path of a file containing the password used by Ansible Vault.
* `ansible.limit` can be set to a string or an array of machines or groups from the inventory file to further control which hosts are affected. Note that:
  * As of Vagrant 1.5, the machine name (taken from the Vagrantfile) is set as the **default limit** to ensure that `vagrant provision` steps only affect the expected machine. Setting `ansible.limit` will override this default.
  * Setting `ansible.limit = 'all'` can be used to make Ansible connect to all machines from the inventory file.
* `ansible.verbose` can be set to increase Ansible's verbosity to obtain detailed logging:
  * `'v'`, verbose mode
  * `'vv'`
  * `'vvv'`, more
  * `'vvvv'`, connection debugging
* `ansible.tags` can be set to a string or an array of tags. Only plays, roles and tasks tagged with these values will be executed.
* `ansible.skip_tags` can be set to a string or an array of tags. Only plays, roles and tasks that *do not match* these values will be executed.
* `ansible.start_at_task` can be set to a string corresponding to the task name where the playbook provision will start.
* `ansible.raw_arguments` can be set to an array of strings corresponding to a list of `ansible-playbook` arguments (e.g. `['--check', '-M /my/modules']`). It is an *unsafe wildcard* that can be used to apply Ansible options that are not (yet) supported by this Vagrant provisioner. As of Vagrant 1.7, `raw_arguments` has the highest priority and its values can potentially override or break other Vagrant settings.
* `ansible.raw_ssh_args` can be set to an array of strings corresponding to a list of OpenSSH client parameters (e.g. `['-o ControlMaster=no']`). It is an *unsafe wildcard* that can be used to pass additional SSH settings to Ansible via the `ANSIBLE_SSH_ARGS` environment variable.
* `ansible.host_key_checking` can be set to `true` to enable host key checking. As of Vagrant 1.5, the default value is `false`, and as of Vagrant 1.7 the user known hosts file (e.g. `~/.ssh/known_hosts`) is no longer read nor modified. In other words: by default, the Ansible provisioner behaves the same as Vagrant native commands (e.g. `vagrant ssh`).
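
As a point of reference, here is a sketch of a provisioner block combining several of these options. The option names come from the list above; the tag name, verbosity level and variable values are illustrative, not requirements:

```ruby
Vagrant.configure("2") do |config|
  config.vm.provision "ansible" do |ansible|
    ansible.playbook      = "playbook.yml"
    ansible.sudo          = true
    ansible.verbose       = "vv"
    ansible.limit         = "all"
    # "web" is a hypothetical tag; use tags that exist in your playbook.
    ansible.tags          = ["web"]
    ansible.extra_vars    = { ntp_server: "pool.ntp.org" }
    # Unsupported ansible-playbook flags can still be passed through:
    ansible.raw_arguments = ["--check"]
  end
end
```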

## Tips and Tricks

### Ansible Parallel Execution

Vagrant is designed to provision [multi-machine environments](/v2/multi-machine) in sequence, but the following configuration pattern can be used to take advantage of Ansible parallelism:

```ruby
# Vagrant 1.7+ automatically inserts a different
# insecure keypair for each new VM created. The easiest way
# to use the same keypair for all the machines is to disable
# this feature and rely on the legacy insecure key.
# config.ssh.insert_key = false
#
# Note:
# As of Vagrant 1.7.3, it is no longer necessary to disable
# the keypair creation when using the auto-generated inventory.

N = 3
(1..N).each do |machine_id|
  config.vm.define "machine#{machine_id}" do |machine|

    machine.vm.hostname = "machine#{machine_id}"
    machine.vm.network "private_network", ip: "192.168.77.#{20+machine_id}"

    # Only execute the Ansible provisioner once,
    # when all the machines are up and ready.
    if machine_id == N
      machine.vm.provision :ansible do |ansible|

        # Disable the default limit to connect to all the machines
        ansible.limit = 'all'
        ansible.playbook = "playbook.yml"

      end
    end

  end
end
```

**Caveats:**

If you apply this parallel provisioning pattern with a static Ansible inventory, you'll have to organize things so that [all the relevant private keys are provided to the `ansible-playbook` command](https://github.com/mitchellh/vagrant/pull/5765#issuecomment-120247738). The same considerations apply if you are using multiple private keys for the same machine (see the [`config.ssh.private_key_path` SSH setting](/v2/vagrantfile/ssh_settings.html)).

### Provide a local `ansible.cfg` file

Certain settings in Ansible are (only) adjustable via a [configuration file](http://docs.ansible.com/intro_configuration.html), and you might want to ship such a file in your Vagrant project.

As the `ansible-playbook` command looks for a local `ansible.cfg` configuration file in its *current directory* (but not in the directory that contains the main playbook), you have to store this file adjacent to your Vagrantfile.

Note that it is also possible to reference an Ansible configuration file via the `ANSIBLE_CONFIG` environment variable, if you want to be flexible about the location of this file.
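
For example, one possible approach (a sketch, not from the original page) to keep the configuration file under `provisioning/` is to export `ANSIBLE_CONFIG` from the Vagrantfile itself, since `ansible-playbook` runs as a child process and inherits the environment:

```ruby
# Sketch: "provisioning/ansible.cfg" is an assumed location for this example.
# The path is resolved relative to the directory where you run vagrant
# commands; use an absolute path if in doubt.
ENV["ANSIBLE_CONFIG"] = "provisioning/ansible.cfg"

Vagrant.configure("2") do |config|
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/playbook.yml"
  end
end
```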

### Why does the Ansible provisioner connect as the wrong user?

It is good to know that the following Ansible settings always override the `config.ssh.username` option defined in [Vagrant SSH Settings](/v2/vagrantfile/ssh_settings.html):

* `ansible_ssh_user` variable
* `remote_user` (or `user`) play attribute
* `remote_user` task attribute

Be aware that copying snippets from the Ansible documentation might lead to this problem, as `root` is used as the remote user in many [examples](http://docs.ansible.com/playbooks_intro.html#hosts-and-users).

Example of an SSH error (with `vvv` log level), where an undefined remote user `xyz` has replaced `vagrant`:

```
TASK: [my_role | do something] *****************
<127.0.0.1> ESTABLISH CONNECTION FOR USER: xyz
<127.0.0.1> EXEC ['ssh', '-tt', '-vvv', '-o', 'ControlMaster=auto',...
fatal: [ansible-devbox] => SSH encountered an unknown error. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue.
```

In a situation like the above, to override the `remote_user` specified in a play, you can use the following line in your Vagrantfile `vm.provision` block:

```
ansible.extra_vars = { ansible_ssh_user: 'vagrant' }
```

### Force Paramiko Connection Mode

The Ansible provisioner is implemented with native OpenSSH support in mind, and there is no official support for [paramiko](https://github.com/paramiko/paramiko/) (a native Python SSHv2 protocol library).

If you really need to use this connection mode, it is still possible to enable paramiko, as illustrated in the following configuration examples.

With the auto-generated inventory:

```
ansible.raw_arguments = ["--connection=paramiko"]
```

With a custom inventory, the private key must be specified (e.g. via an `ansible.cfg` configuration file, the `--private-key` argument, or as part of your inventory file):

```
ansible.inventory_path = "./my-inventory"
ansible.raw_arguments = [
  "--connection=paramiko",
  "--private-key=/home/.../.vagrant/machines/.../private_key"
]
```