We’ve already set up passwordless SSH between the control node and managed nodes when we provisioned the infrastructure using Terraform. Let’s look at how we did that to understand it better.
We created an Azure Virtual Network (VNet), a subnetwork (subnet), and three Azure VMs called control-node, web, and db within that subnet. If we look at the VM resource configuration, we also have a custom_data field that can be used to pass an initialization script to the VM, as follows:
resource "azurerm_virtual_machine" "control_node" {
  name = "ansible-control-node"
  ...
  os_profile {
    ...
    custom_data = base64encode(data.template_file.control_node_init.rendered)
  }
}

resource "azurerm_virtual_machine" "web" {
  name = "web"
  ...
  os_profile {
    ...
    custom_data = base64encode(data.template_file.managed_nodes_init.rendered)
  }
}

resource "azurerm_virtual_machine" "db" {
  name = "db"
  ...
  os_profile {
    ...
    custom_data = base64encode(data.template_file.managed_nodes_init.rendered)
  }
}
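Azure expects custom_data to be base64-encoded, which is why the rendered template is wrapped in base64encode(). A minimal shell sketch of that round trip, using a made-up script body in place of the rendered template:

```shell
# Hypothetical stand-in for data.template_file.*.rendered
script='#!/bin/bash
sudo useradd -m ansible'

# Equivalent of Terraform's base64encode() on the rendered script
encoded=$(printf '%s' "$script" | base64)

# On first boot, the platform decodes custom_data back into the original script
printf '%s' "$encoded" | base64 -d
```

Terraform only encodes the payload; decoding and executing it on first boot is handled by the VM image's provisioning agent.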
As we can see, the control_node VM refers to a data.template_file.control_node_init resource, and the web and db nodes refer to a data.template_file.managed_nodes_init resource. These are template_file data sources that render template files with the variables we pass in. Let’s look at them, as follows:
data "template_file" "managed_nodes_init" {
  template = file("managed-nodes-user-data.sh")
  vars = {
    admin_password = var.admin_password
  }
}

data "template_file" "control_node_init" {
  template = file("control-node-user-data.sh")
  vars = {
    admin_password = var.admin_password
  }
}
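The template_file data source replaces ${...} placeholders in the file with the supplied vars when the template is rendered. To illustrate the substitution, here is a sketch that emulates it with sed; the template line and password value are made up for the example:

```shell
# A simplified template line, before rendering (single quotes keep ${...} literal)
template='ansible_password=${admin_password}'

# Hypothetical value that would come from var.admin_password
admin_password='Sup3rSecret'

# Emulate Terraform's interpolation with a simple sed substitution
rendered=$(printf '%s' "$template" | sed "s/\${admin_password}/${admin_password}/")
echo "$rendered"
```

In the real setup, Terraform performs this substitution before the script ever reaches the VM, so the user-data scripts below run with the password already filled in.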
As we can see, the managed_nodes_init resource points to the managed-nodes-user-data.sh file and passes an admin_password variable to that file. Similarly, the control_node_init resource points to the control-node-user-data.sh file. Let’s look at the managed-nodes-user-data.sh file first:
sudo useradd -m ansible
echo 'ansible ALL=(ALL) NOPASSWD:ALL' | sudo tee -a /etc/sudoers
sudo su - ansible << EOF
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
printf "${admin_password}\n${admin_password}" | sudo passwd ansible
EOF
As we can see, it is a shell script that does the following:
- Creates an ansible user.
- Adds the user to the sudoers list.
- Generates an ssh key pair for passwordless authentication.
- Sets the password for the ansible user.
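The ssh-keygen invocation in the script (-N "" for an empty passphrase, -f for the output path, so no prompts block the boot-time run) can be tried safely in a scratch directory:

```shell
# Generate a passwordless RSA key pair in a temporary directory,
# mirroring the script's ssh-keygen step (paths here are illustrative)
keydir=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$keydir/id_rsa" -q

# id_rsa stays private on the node; id_rsa.pub is what gets copied
# to other hosts for passwordless authentication
ls "$keydir"
```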
As we’ve generated the ssh key pair on the managed nodes, we need to do the same on the control node, along with some additional configuration. Let’s look at the control-node-user-data.sh script, as follows:
sudo useradd -m ansible
echo 'ansible ALL=(ALL) NOPASSWD:ALL' | sudo tee -a /etc/sudoers
sudo su - ansible << EOF
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
sleep 120
ssh-keyscan -H web >> ~/.ssh/known_hosts
ssh-keyscan -H db >> ~/.ssh/known_hosts
sudo apt update -y && sudo apt install -y sshpass
echo "${admin_password}" | sshpass ssh-copy-id ansible@web
echo "${admin_password}" | sshpass ssh-copy-id ansible@db
EOF
The script does the following:
- Creates an ansible user
- Adds the user to the sudoers list
- Generates an ssh key pair for passwordless authentication
- Adds the web and db VMs to the known_hosts file to ensure we trust both hosts
- Installs the sshpass utility to allow for sending the ssh public key to the web and db VMs
- Copies the ssh public key to the web and db VMs for passwordless connectivity
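The -H flag makes ssh-keyscan emit hashed hostnames, so the hostnames in known_hosts are not readable in plain text. The same hashing can be demonstrated offline with ssh-keygen -H; the host key below is freshly generated purely for illustration:

```shell
# Build a known_hosts entry for a host called "web" from a throwaway host key
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -f "$keydir/hostkey" -q
kh="$keydir/known_hosts"
printf 'web %s\n' "$(cut -d' ' -f1,2 "$keydir/hostkey.pub")" > "$kh"

# ssh-keygen -H rewrites entries with hashed hostnames -- the |1|... form
# that ssh-keyscan -H produces directly
ssh-keygen -H -f "$kh" > /dev/null 2>&1
head -c 3 "$kh"
```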
These scripts get executed automatically when the VMs are created; therefore, passwordless SSH should already be working. So, let’s use an SSH client to log in to ansible-control-node using the IP address we got in the last step and the username and password we configured in the terraform.tfvars file:
$ ssh <username>@<control-node-ip>
Once you are in the control node server, switch the user to ansible and try doing an SSH to the web server using the following commands:
$ sudo su - ansible
$ ssh web
If you land on the web server without being prompted for a password, passwordless authentication is working correctly.
Repeat the same steps to check whether you can connect with the db server.
Exit the prompts until you are in the control node.
Now, as we’re in the control node, let’s install Ansible.