
ANSIBLE DYNAMIC INVENTORY AUTOMATION

Loveday Philip Obi - PeerSpot reviewer

Project Description


#aws #ansible #linux #ubuntu

This project makes it possible to patch over 100 dynamic servers running in an auto-scaling group with a single command.

- In AWS Console:

* I started by creating an IAM Role and attaching the AmazonEC2FullAccess policy to it.

* Then I spun up an Ubuntu server as the Control Node and 4 Linux servers as the Managed Nodes. I tagged the managed nodes 'Env: Dev', attached the IAM Role I created earlier to all servers, then configured the Security Group to allow port 22 (SSH) and port 80 (HTTP) from Anywhere. (NB: don't allow access from Anywhere in a Production environment.)

- In CLI:

* I SSH'd into the Ubuntu Control Node and ran the commands below. NB: I used the 'apt' package manager because the control node runs Ubuntu.

* sudo apt update -y (to refresh the package lists)

* sudo apt install -y ansible (to install Ansible)

* sudo apt install python-is-python3 (so the 'python' command points to Python 3, which the AWS modules expect)

* sudo apt-get install python3-pip -y (to install 'pip', which we'll use to install the boto libraries)

* sudo pip3 install boto (to install boto, the legacy AWS SDK for Python)

* sudo pip3 install boto3 (to install boto3, the AWS SDK for Python that connects to AWS to fetch the instance IP addresses)

* ansible-galaxy collection install amazon.aws (to install the Amazon AWS collection, which provides the aws_ec2 inventory plugin)
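Under the hood, the aws_ec2 plugin uses boto3 to query EC2 for matching instances. A minimal sketch of that lookup (the helper functions and names here are illustrative, not part of boto3 or the project):

```python
# Sketch of the boto3 lookup the aws_ec2 inventory plugin performs internally.
# build_tag_filters and dev_host_ips are hypothetical helpers for illustration.

def build_tag_filters(tags):
    """Translate {'Env': 'Dev'}-style tags into boto3 Filters,
    keeping only running instances."""
    filters = [{"Name": f"tag:{key}", "Values": [value]}
               for key, value in tags.items()]
    filters.append({"Name": "instance-state-name", "Values": ["running"]})
    return filters

def dev_host_ips(region="us-west-2"):
    """Return the public IPs of all running 'Env: Dev' instances."""
    import boto3  # needs AWS credentials, e.g. from the attached IAM Role
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(
        Filters=build_tag_filters({"Env": "Dev"}))["Reservations"]
    return [inst.get("PublicIpAddress")
            for r in reservations for inst in r["Instances"]]
```

This is why the IAM Role attached earlier matters: boto3 picks up the role's credentials automatically on EC2, so no access keys need to be stored on the control node.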

* vi aws_ec2.yaml (to create and set up the YAML file that acts as the dynamic inventory)

* Refer to 'DYNAMIC INVENTORY' in the Architecture diagram to view the configuration that uses the aws_ec2 plugin to pick up all the managed hosts tagged 'Env: Dev' in the us-west-2 region.
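The exact file is shown in the architecture diagram; an aws_ec2 inventory of this kind typically looks like the sketch below, using standard aws_ec2 plugin options:

```yaml
# aws_ec2.yaml - illustrative dynamic inventory sketch (the project's
# actual file is in the architecture diagram)
plugin: amazon.aws.aws_ec2
regions:
  - us-west-2
filters:
  tag:Env: Dev                  # only pick up nodes tagged 'Env: Dev'
  instance-state-name: running  # ignore stopped/terminated instances
```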

* sudo vi /etc/ansible/ansible.cfg (to edit the Ansible configuration file)

* Inside the ansible.cfg file I searched (with /) for #sudo_user and #host_key_checking, uncommented both lines, then saved and quit with :wq!.
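After uncommenting, the relevant lines in /etc/ansible/ansible.cfg look roughly like this (the values shown are the defaults shipped in the file):

```ini
[defaults]
# previously '#sudo_user = root' and '#host_key_checking = False'
sudo_user = root
host_key_checking = False
```

Disabling host key checking avoids the interactive "are you sure you want to continue connecting" prompt for each new auto-scaled host, which would otherwise block unattended runs.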

* ansible-inventory -i aws_ec2.yaml --list (to pull the instances tagged 'Env: Dev' into the control node's inventory and list them)

* vi avi-jes-oregon.pem (open your private key pair in any code editor (I used Atom), copy its contents, then paste and save them into this .pem file)

* chmod 400 avi-jes-oregon.pem (to restrict the key file's permissions, which SSH requires before it will use the key to connect to the Managed Hosts)

* ansible aws_ec2 -i aws_ec2.yaml -m ping --private-key=avi-jes-oregon.pem --user ec2-user (I ran this ad-hoc command with the 'ping' module to test the connections)

* ansible aws_ec2 -i aws_ec2.yaml -m yum -a 'name=git state=present' --private-key=avi-jes-oregon.pem --become --user ec2-user (I ran this 'yum' module command with privilege escalation (--become) to install git on the Linux target nodes)

* vi test-playbook.yaml (to create a playbook file, in which I wrote my mini play in YAML). Refer to the Architecture Diagram to view the playbook.
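The actual play is in the architecture diagram; since port 80 was opened and the nodes were later tested over HTTP, it was presumably along these lines (the package and service names here are assumptions):

```yaml
---
# test-playbook.yaml - illustrative mini play (the real one is in the
# architecture diagram); assumes an Apache web server is being set up
- name: Configure the Env Dev managed nodes
  hosts: all
  become: true
  tasks:
    - name: Install Apache
      ansible.builtin.yum:
        name: httpd
        state: present

    - name: Start and enable Apache
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true
```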

* ansible-playbook -i aws_ec2.yaml -l aws_ec2 test-playbook.yaml --private-key=avi-jes-oregon.pem --user ec2-user (to run and test the playbook against the managed nodes)

* Then I copied the public IP of each of the 4 managed nodes and tested it in the browser; the response was affirmative 😉
