Modules: The Tools in the Toolbox
We've discussed that Modules are the scripts Ansible pushes to remote servers to perform actual work. Ansible ships with hundreds of built-in "Core" modules.
You don't need to memorize them all—the official Ansible documentation is your best friend—but you should be familiar with the most common categories.
Common Core Modules
| Category | Module | What it does |
|---|---|---|
| System | `apt` / `dnf` | Installs/removes packages natively on Debian/RHEL systems. |
| | `service` / `systemd` | Starts, stops, and enables system background services. |
| | `user` / `group` | Creates and manages Linux users, passwords, and groups. |
| | `cron` | Manages cron jobs (scheduled tasks). |
| Files | `copy` | Copies files from the Control Node to the Managed Nodes. |
| | `file` | Sets permissions, ownership, or creates empty directories/symlinks. |
| | `lineinfile` | Ensures a specific line of text exists in a file (without overwriting the whole file). |
| | `template` | Renders a Jinja2 template with variables and copies the result to the node. |
| Commands | `command` | Executes a command directly, without invoking a shell (no piping or redirects). |
| | `shell` | Executes a command through `/bin/sh`, allowing piping and redirects. |
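To make this concrete, here is a hedged sketch of how a couple of these modules look in real tasks (the package name and the `sysctl.conf` line are illustrative, not from a real project):

```yaml
# Illustrative tasks using common core modules.
- name: Install nginx on a Debian-family host
  apt:
    name: nginx
    state: present

- name: Ensure a kernel tuning line exists in sysctl.conf
  lineinfile:
    path: /etc/sysctl.conf
    line: "vm.swappiness = 10"
```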
Tasks: Wrapping Modules in Logic
A Task is simply a call to a Module, wrapped in metadata (like a name, privilege escalation, or conditionals). Tasks are the building blocks of Plays.
Tasks execute sequentially, top to bottom.
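As a sketch of "module call plus metadata": the module invocation (`apt`) is wrapped in a `name`, privilege escalation (`become`), and a conditional (`when`). The package choice here is illustrative; `ansible_os_family` is a standard Ansible fact.

```yaml
- name: Install htop only on Debian-family hosts  # metadata: human-readable label
  apt:                                            # the module call itself
    name: htop
    state: present
  become: yes                                     # metadata: privilege escalation
  when: ansible_os_family == "Debian"             # metadata: conditional
```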
Idempotence in Tasks
Remember Idempotence? Let's look at how tasks achieve it.
Task: Create a Directory
```yaml
- name: Ensure application data directory exists
  file:
    path: /opt/my_app/data
    state: directory
    owner: dev_user
    mode: '0755'
```

- Run 1: Ansible sees `/opt/my_app/data` does not exist. It creates the folder, sets ownership, and reports `changed`.
- Run 2: Ansible sees the folder exists and has the correct ownership. It does nothing and reports `ok`.
- Run 3 (someone manually changed the owner to `root`): Ansible sees the folder exists, but the owner is wrong. It changes the owner back to `dev_user` and reports `changed`.
By defining `state: directory`, you never write `mkdir` logic. Ansible handles the "how".
The Problem with Service Restarts
Consider a classic playbook pattern: You copy a configuration file to a server, and then you restart the application service so it loads the new configuration.
```yaml
# A Flawed Playbook
tasks:
  - name: Copy Nginx config
    copy:
      src: nginx.conf
      dest: /etc/nginx/nginx.conf

  - name: Restart Nginx to apply changes
    service:
      name: nginx
      state: restarted  # ❌ This violates idempotence!
```

Why is this flawed?
The `state: restarted` directive tells Ansible to unconditionally restart Nginx. If you run this playbook every hour as part of a compliance check, Ansible will restart your web server every single hour, dropping active user connections, even if the configuration file never changed!
You only want to restart the service IF the configuration file was modified during that specific playbook run.
Enter Handlers.
Handlers: Intelligent Reactions
A Handler is just a Task, but it does not execute sequentially. Instead, it waits invisibly at the end of the play. It ONLY executes if another Task explicitly "notifies" it that a change occurred.
Writing a Handler
Handlers are defined in their own block, peer to the tasks block.
```yaml
---
- name: Web Server Configuration
  hosts: webservers
  become: yes

  tasks:
    - name: Copy Nginx config
      copy:
        src: nginx.conf
        dest: /etc/nginx/nginx.conf
      notify: Restart Nginx Service  # 🔔 The trigger! Matches the handler name exactly.

  # Handlers block definition
  handlers:
    - name: Restart Nginx Service  # 🎯 The Handler (Waiting)
      service:
        name: nginx
        state: restarted
```

How Handlers Execute
- Task runs. Ansible copies the file.
- If the file was modified, the copy task reports `changed` and fires the `notify` trigger.
- If the file was identical, the copy task reports `ok`, and the `notify` trigger is ignored.
- At the very end of the play, Ansible flushes all triggered handlers.
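One consequence of this end-of-play flush: if a later task in the same play needs the restart to have already happened, Ansible's built-in `meta: flush_handlers` task forces notified handlers to run immediately. A sketch (the smoke-test task is illustrative):

```yaml
tasks:
  - name: Copy Nginx config
    copy:
      src: nginx.conf
      dest: /etc/nginx/nginx.conf
    notify: Restart Nginx Service

  # Run any notified handlers right now, instead of waiting for the end of the play
  - meta: flush_handlers

  - name: Smoke-test the restarted service  # illustrative follow-up task
    uri:
      url: http://localhost/
```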
Multiple Triggers, One Restart
What if you modify the `nginx.conf` file AND update the SSL certificates? You notify the handler in both tasks!
Ansible is smart. Even if a handler is notified 10 times by 10 different tasks, the handler will only run exactly once at the end of the play. This prevents your service from bouncing up and down repeatedly during a deployment.
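A sketch of that pattern, with two tasks notifying the same handler (the certificate filenames are illustrative):

```yaml
tasks:
  - name: Copy Nginx config
    copy:
      src: nginx.conf
      dest: /etc/nginx/nginx.conf
    notify: Restart Nginx Service

  - name: Copy SSL certificate          # illustrative second trigger
    copy:
      src: example.crt
      dest: /etc/nginx/ssl/example.crt
    notify: Restart Nginx Service       # same handler name as above

handlers:
  - name: Restart Nginx Service         # runs at most once per play
    service:
      name: nginx
      state: restarted
```

Even if both copy tasks report `changed`, Nginx restarts only once, at the end of the play.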