Ansible. My own orchestra!

What is Ansible? The description of Ansible's GitHub repository says that Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy. Well, it is. But those are just words, so let's dive in and see how it works.

First, we need a test environment. I'm going to use Vagrant (with VirtualBox "under" it). We need a couple of hosts, so I just took a Vagrantfile from one of my previous posts:

Vagrant.configure("2") do |config|

  config.vm.define "host1" do |host1|
    host1.vm.box = "centos/7"
    host1.vm.network "private_network", ip: "192.168.33.10"
  end

  config.vm.define "host2" do |host2|
    host2.vm.box = "centos/7"
    host2.vm.network "private_network", ip: "192.168.33.20"
  end

end
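Bring both machines up (the first run also downloads the centos/7 box, so it can take a while):

$ vagrant up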

Once Vagrant is done with the VMs, check that they are reachable:

$ ping -c 1 -w 1 192.168.33.10 1>/dev/null && echo "host1 up!" || echo "host1 not available"
$ ping -c 1 -w 1 192.168.33.20 1>/dev/null && echo "host2 up!" || echo "host2 not available"

You should see something like this: host1 up! host2 up!

One of Ansible's main advantages (one of many) is that it works without an agent on the client. So we only need to install Ansible on the master host. On my Fedora machine I can do it with:

# dnf install ansible
# ansible --version
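If your distribution doesn't ship a (recent enough) Ansible package, installing it from PyPI is a reasonable alternative:

$ pip install --user ansible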

Now look at our master node’s /etc/ansible directory:

$ ls /etc/ansible
ansible.cfg  hosts  roles

ansible.cfg is the main config file. Keep in mind that Ansible looks for its configuration in the following order: the ANSIBLE_CONFIG environment variable, ansible.cfg in the current working directory, .ansible.cfg in the home directory, or /etc/ansible/ansible.cfg, using the first one it finds.
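For example, you can point a single run at a project-specific config through that environment variable (the path below is just an illustration); ansible --version shows which config file was actually picked up:

$ ANSIBLE_CONFIG=~/myproject/ansible.cfg ansible --version

For now we will edit only one file - hosts. We need it to look like this: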

$ cat /etc/ansible/hosts
<...default comments omitted...>
[easthosts]
192.168.33.10

[westhosts]
192.168.33.20

[easthosts:vars]
ansible_user=ansible

[westhosts:vars]
ansible_user=ansible

We've just added two host groups (yes, with one host in each). After saving the file we need to run ssh-keygen and ssh-copy-id from the master node to the "slaves" - yes, Ansible works over SSH. Ssh-keygen will ask you a couple of questions. But which user should it connect as? I prefer to create new ones.

$ vagrant ssh host1
$ sudo -i
# useradd ansible
# passwd ansible
# exit
$ exit
$ vagrant ssh host2
$ sudo -i
# useradd ansible
# passwd ansible

And just in case: if you use CentOS (like in this example), you should edit /etc/ssh/sshd_config, make sure it contains "PasswordAuthentication yes", and restart sshd - otherwise ssh-copy-id won't be able to log in with the password you just set.
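On each guest that boils down to something like this (sed is just one way to flip the setting - adjust it to whatever your sshd_config actually contains):

# sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
# systemctl restart sshd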

Ok, now we’re ready to copy our key
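If there is no key pair on the master node yet, generate one first (the defaults ssh-keygen suggests are fine for this test setup):

$ ssh-keygen

Then copy the public key to both hosts: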

ssh-copy-id ansible@192.168.33.10
ssh-copy-id ansible@192.168.33.20
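Before running anything for real, it's worth checking which hosts Ansible actually sees in the inventory:

$ ansible all --list-hosts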

Ok, time to…

Run our first ad-hoc command

It's very easy, just run:

ansible all -a "cat /etc/redhat-release"

And you will see something like this:

[Screenshot: Ansible ad-hoc command output]

Nice! And of course you can run commands against each group separately:

$ ansible westhosts -a "ls /etc/ssh"

Ok, pretty easy. Now let’s try to play with modules.

$ ansible all -m ping

Just a ping. Note that Ansible's ping module doesn't send ICMP packets - it simply checks that Ansible can reach the host over SSH and find a usable Python there.

[Screenshot: ping module output]

Let's try to create a file on the westhosts servers only:

[Screenshot: Ansible file module run]

In the screenshot you can see that the state= parameter is responsible for the file's state (touch or absent).
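A command along these lines does the trick (/tmp/testfile is just an example path); switching state= to absent removes the file again:

$ ansible westhosts -m file -a "path=/tmp/testfile state=touch"
$ ansible westhosts -m file -a "path=/tmp/testfile state=absent"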

Ansible has a huge number of modules; the full list is in the official module index.
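You can also browse them without leaving the terminal - ansible-doc lists the available modules and shows the documentation for any one of them:

$ ansible-doc -l        # list all available modules
$ ansible-doc yum       # show documentation and examples for the yum module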

Install package

Let's pick one of them, yum for example. Assume that we need some package to be installed on our servers. We know which package manager our distributions use, so we pick the module and run it with the right parameters (-b tells Ansible to become root, and -K makes it ask for the sudo password):

ansible all -b -K -m yum -a "name=mc state=latest"

[Screenshot: Ansible error]

And… it doesn't work. Why? Because our user "ansible" is not in the sudoers list on the "slaves". Fix it like this (works on CentOS):

$ vagrant ssh host1
$ sudo -i
# usermod -aG wheel ansible
# exit
$ exit

Repeat these steps for host2
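Or, to save a login round-trip, vagrant ssh can run a single command on the guest with -c:

$ vagrant ssh host2 -c "sudo usermod -aG wheel ansible"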

Try to run Ansible again, and everything should work as expected. Now log in to host1 and check it:

[Screenshot: checking the installed package on host1]
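The check itself is just a package query on the guest (mc is the Midnight Commander package we installed above):

$ vagrant ssh host1
$ rpm -q mc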

And one last thing - deleting a package. It's very easy: just change "state=" to "absent".

ansible all -b -K -m yum -a "name=mc state=absent"

Well, I think that’s enough for today. Next time I’ll write about playbooks. Stay tuned.

Written on August 9, 2017