Hey there! In our previous post, we explored why automation is such a game-changer in the DevOps world. Now that you're convinced automation is worth your time (and trust me, it is!), let's roll up our sleeves and learn how to actually do it. Welcome to the second installment of our "Mastering Automation in DevOps" series!
Why Bash and Python?
Before we dive into code, let's talk about why these two languages specifically. In the vast DevOps toolkit, Bash and Python stand out as the Swiss Army knives that every practitioner should master.
Bash is the default shell for most Linux distributions and macOS. It's already there, waiting for you to harness its power. Since most servers run on Linux, knowing Bash is non-negotiable for anyone in DevOps.
Python, on the other hand, has become the de facto language for automation because of its readability, vast library ecosystem, and gentle learning curve. From infrastructure management to data processing, Python can handle it all.
Together, these languages form a powerful combo that can automate virtually any DevOps task. But when should you use which?
- Use Bash when: You're working directly with the operating system, managing files, or running simple sequences of commands.
- Use Python when: You need more complex logic, error handling, API interactions, or when working with data.
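To make that split concrete, here's a hypothetical sketch of the kind of task that tips the balance toward Python: filtering structured JSON output (say, from a cloud CLI) with logic that would get ugly fast in a Bash pipeline. The instance data below is invented purely for illustration.

```python
import json

# Pretend this JSON came from a CLI tool's --format json output
raw = '''
[
  {"name": "web-1", "state": "running", "cpu": 92},
  {"name": "web-2", "state": "stopped", "cpu": 0},
  {"name": "db-1",  "state": "running", "cpu": 35}
]
'''

def overloaded(instances, threshold=80):
    """Return names of running instances above a CPU threshold."""
    return [i["name"] for i in instances
            if i["state"] == "running" and i["cpu"] > threshold]

instances = json.loads(raw)
print(overloaded(instances))
```

Two conditions, a data structure, and readable code — exactly where Python earns its keep.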
Getting Started with Bash
Setting Up Your Environment
If you're on macOS or Linux, congratulations! Bash is already installed. Windows users have a few options:
- Install Windows Subsystem for Linux (WSL)
- Use Git Bash
- Try Cygwin
To check your Bash version, open a terminal and type:
```bash
bash --version
```
Your First Bash Script
Let's create a simple script that checks if a website is up. Create a file named check_site.sh and add:
```bash
#!/bin/bash
# This script checks if a website is up

echo "Checking if $1 is up..."

# Ask curl for just the HTTP status code (grepping for "200 OK" is
# fragile: HTTP/2 responses don't include the "OK" reason phrase)
STATUS=$(curl -s -o /dev/null --head -w "%{http_code}" "$1")

if [ "$STATUS" -ge 200 ] && [ "$STATUS" -lt 400 ]; then
  echo "✅ $1 is UP! (HTTP $STATUS)"
else
  echo "❌ $1 is DOWN!"
fi
```
To make it executable:
```bash
chmod +x check_site.sh
```
To run it:
```bash
./check_site.sh https://devopshorizon.com
```
Bash Basics Every DevOps Engineer Should Know
- Variables: Store and reuse values
```bash
NAME="DevOps Horizon"
echo "Welcome to $NAME"
```
- Conditionals: Make decisions in your scripts
```bash
if [ "$STATUS" == "success" ]; then
  echo "Deployment successful!"
else
  echo "Deployment failed!"
fi
```
- Loops: Repeat actions
```bash
for server in server1 server2 server3; do
  ssh user@"$server" "sudo apt update"
done
```
- Functions: Create reusable blocks of code
```bash
deploy() {
  echo "Deploying to $1..."
  # deployment logic here
}

deploy "production"
```
Getting Started with Python
Setting Up Your Environment
Unlike Bash, Python requires installation on most systems. Download it from python.org or use your system's package manager.
For DevOps work, I recommend setting up a virtual environment for each project:
```bash
# Create a virtual environment
python -m venv myproject_env

# Activate it (Linux/macOS)
source myproject_env/bin/activate

# Activate it (Windows)
myproject_env\Scripts\activate
```
Your First Python Script
Let's create a script that performs the same website check, but with more features. It uses the third-party requests library (`pip install requests`):
```python
#!/usr/bin/env python3
# This script checks if multiple websites are up
import sys

import requests

def check_website(url):
    try:
        response = requests.get(url, timeout=5)
        return response.status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python check_sites.py [url1] [url2] ...")
        sys.exit(1)

    sites = sys.argv[1:]
    for site in sites:
        if not site.startswith(('http://', 'https://')):
            site = 'https://' + site
        status = "UP ✅" if check_website(site) else "DOWN ❌"
        print(f"{site} is {status}")
```
To run it:
```bash
python check_sites.py devopshorizon.com google.com nonexistentwebsite123456.com
```
Python Basics for DevOps Automation
- Libraries: Python's superpower is its vast ecosystem
```python
# For AWS automation
import boto3

# For HTTP requests
import requests

# For working with APIs
import json
```
- File Operations: Read and write configuration files
```python
import json
import time

# Read a config file
with open('config.json', 'r') as file:
    config = json.load(file)

# Write to a log file
with open('deployment.log', 'a') as log:
    log.write(f"{time.ctime()}: Deployment started\n")
```
- Error Handling: Gracefully handle exceptions
```python
try:
    # api and send_alert are placeholders for your own client and alert helper
    response = api.create_instance(config)
    print("Instance created successfully!")
except Exception as e:
    print(f"Failed to create instance: {e}")
    send_alert("Instance creation failed", str(e))
```
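Building on that try/except pattern, a retry-with-backoff helper is one of the first things worth adding to a personal automation toolkit — transient failures (flaky networks, rate limits) are the norm in DevOps work. This is a minimal sketch using only the standard library; the function and parameter names are my own, not from any particular framework.

```python
import time

def retry(func, attempts=3, delay=1, backoff=2):
    """Call func(); on exception, wait and retry with exponential backoff."""
    wait = delay
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except Exception as e:
            if attempt == attempts:
                raise  # out of attempts, re-raise the last error
            print(f"Attempt {attempt} failed ({e}), retrying in {wait}s...")
            time.sleep(wait)
            wait *= backoff

# Example: a flaky operation that succeeds on the third try
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky, delay=0))  # two retry messages, then prints "ok"
```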
Real-World DevOps Automation Examples
Example 1: Server Health Check with Bash
This script checks CPU, memory, and disk usage and alerts if thresholds are exceeded:
```bash
#!/bin/bash
# Server health monitoring script

# Define thresholds
CPU_THRESHOLD=80
MEMORY_THRESHOLD=80
DISK_THRESHOLD=90

# Get current usage
CPU_USAGE=$(top -bn1 | grep "Cpu(s)" | awk '{print $2 + $4}')
MEMORY_USAGE=$(free | grep Mem | awk '{print $3/$2 * 100.0}')
DISK_USAGE=$(df / | awk 'NR==2 {print $5}' | tr -d '%')

# Check CPU
if (( $(echo "$CPU_USAGE > $CPU_THRESHOLD" | bc -l) )); then
  echo "WARNING: CPU usage is at $CPU_USAGE%"
fi

# Check memory
if (( $(echo "$MEMORY_USAGE > $MEMORY_THRESHOLD" | bc -l) )); then
  echo "WARNING: Memory usage is at $MEMORY_USAGE%"
fi

# Check disk
if [ "$DISK_USAGE" -gt "$DISK_THRESHOLD" ]; then
  echo "WARNING: Disk usage is at $DISK_USAGE%"
fi
```
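If you'd rather have a similar check in Python, the standard library covers most of it: `shutil.disk_usage` for disk and `os.getloadavg` for load (Unix only). A rough sketch — the 0.8 normalized-load threshold is my own arbitrary choice, not a standard:

```python
import os
import shutil

DISK_THRESHOLD = 90  # percent

def disk_usage_percent(path="/"):
    """Return used disk space as a percentage of total."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def check_health():
    warnings = []

    disk = disk_usage_percent("/")
    if disk > DISK_THRESHOLD:
        warnings.append(f"WARNING: Disk usage is at {disk:.1f}%")

    # 1-minute load average, normalized by CPU count (Unix only)
    load = os.getloadavg()[0] / (os.cpu_count() or 1)
    if load > 0.8:
        warnings.append(f"WARNING: Normalized load is {load:.2f}")

    return warnings

for warning in check_health():
    print(warning)
```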
Example 2: Automated Deployment with Python
A simplified version of a deployment script:
```python
#!/usr/bin/env python3
# Simple deployment script
import os
import subprocess
import time

import requests

def log(message):
    timestamp = time.strftime("%Y-%m-%d %H:%M:%S")
    print(f"[{timestamp}] {message}")

def notify_slack(message):
    webhook_url = os.getenv("SLACK_WEBHOOK")
    if webhook_url:
        requests.post(webhook_url, json={"text": message})

def run_command(command):
    log(f"Running: {command}")
    try:
        result = subprocess.run(command, shell=True, check=True,
                                capture_output=True, text=True)
        log(f"Success: {result.stdout.strip()}")
        return True
    except subprocess.CalledProcessError as e:
        log(f"Error: {e.stderr.strip()}")
        return False

def deploy():
    log("Starting deployment")

    # Pull latest code
    if not run_command("git pull origin main"):
        notify_slack("❌ Deployment failed at git pull stage")
        return False

    # Install dependencies
    if not run_command("npm install"):
        notify_slack("❌ Deployment failed at npm install stage")
        return False

    # Build application
    if not run_command("npm run build"):
        notify_slack("❌ Deployment failed at build stage")
        return False

    # Restart service
    if not run_command("pm2 restart app"):
        notify_slack("❌ Deployment failed at restart stage")
        return False

    log("Deployment completed successfully")
    notify_slack("✅ Deployment completed successfully!")
    return True

if __name__ == "__main__":
    deploy()
```
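One refinement worth considering: every step in deploy() follows the same run-or-abort pattern, so you can drive it from a list and cut the repetition. A sketch of that idea, with echo and exit commands standing in for the real git/npm/pm2 steps:

```python
import subprocess

def run_command(command):
    """Run a shell command; return True on success, False on failure."""
    try:
        subprocess.run(command, shell=True, check=True,
                       capture_output=True, text=True)
        return True
    except subprocess.CalledProcessError:
        return False

def deploy(steps):
    """Run each (name, command) step in order; stop at the first failure."""
    for name, command in steps:
        if not run_command(command):
            print(f"Deployment failed at {name} stage")
            return False
    print("Deployment completed successfully")
    return True

# echo commands stand in for "git pull", "npm install", etc.
STEPS = [
    ("pull", "echo pulling"),
    ("build", "echo building"),
    ("restart", "echo restarting"),
]
deploy(STEPS)
```

Adding a new deployment stage then becomes a one-line change to the list rather than another copy-pasted if-block.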
Example 3: Combining Bash and Python
Let's create a backup solution that uses both languages:
- A Python script to identify what needs backing up:
```python
#!/usr/bin/env python3
# backup_prep.py - Identifies files for backup
import datetime
import json
import os

def find_files_to_backup(directories, max_age_days=7):
    files_to_backup = []
    cutoff_time = datetime.datetime.now() - datetime.timedelta(days=max_age_days)
    cutoff_timestamp = cutoff_time.timestamp()

    for directory in directories:
        for root, _, files in os.walk(directory):
            for file in files:
                file_path = os.path.join(root, file)
                mod_time = os.path.getmtime(file_path)
                if mod_time >= cutoff_timestamp:
                    files_to_backup.append(file_path)

    return files_to_backup

if __name__ == "__main__":
    dirs_to_backup = ["/var/www", "/etc/nginx"]
    backup_list = find_files_to_backup(dirs_to_backup)

    with open("/tmp/backup_list.json", "w") as f:
        json.dump(backup_list, f)

    print(f"Found {len(backup_list)} files to backup")
```
- A Bash script to perform the actual backup (it uses jq to parse the JSON list):
```bash
#!/bin/bash
# backup.sh - Creates backup from list

BACKUP_DIR="/var/backups/$(date +%Y-%m-%d)"
BACKUP_LIST="/tmp/backup_list.json"

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Loop through the files line by line so paths with spaces survive
while IFS= read -r FILE; do
  # Recreate the directory structure under the backup dir
  DEST_DIR="$BACKUP_DIR/$(dirname "$FILE")"
  mkdir -p "$DEST_DIR"

  # Copy file, preserving permissions and timestamps
  cp -p "$FILE" "$DEST_DIR/"
  echo "Backed up: $FILE"
done < <(jq -r '.[]' "$BACKUP_LIST")

# Create archive
tar -czf "$BACKUP_DIR.tar.gz" -C "$(dirname "$BACKUP_DIR")" "$(basename "$BACKUP_DIR")"

# Clean up
rm -rf "$BACKUP_DIR"
rm "$BACKUP_LIST"

echo "Backup completed: $BACKUP_DIR.tar.gz"
```
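As an aside, if you wanted the whole backup step in Python too, the standard library's tarfile module can replace the cp-then-tar dance. A minimal sketch — the paths in the commented example are illustrative, matching the ones used above:

```python
import json
import tarfile

def create_backup(list_path, archive_path):
    """Archive every file named in a JSON list into a gzipped tarball."""
    with open(list_path) as f:
        files = json.load(f)

    with tarfile.open(archive_path, "w:gz") as tar:
        for path in files:
            tar.add(path)

    return len(files)

# Example (paths are illustrative):
# create_backup("/tmp/backup_list.json", "/var/backups/2024-01-01.tar.gz")
```

Whether this beats the Bash version is mostly a matter of taste; the Python version wins once you need error handling per file or want to skip unreadable paths gracefully.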
Best Practices for DevOps Scripting
- Make Scripts Idempotent: Your scripts should be safe to run multiple times without causing harm.
- Add Proper Error Handling: Always include error checking and appropriate exit codes.
- Document Your Code: Include comments explaining what the script does and how to use it.
- Use Version Control: Keep your scripts in a Git repository for tracking changes.
- Implement Logging: Add logging to help with troubleshooting.
- Keep Security in Mind: Never hardcode sensitive information like passwords in scripts.
- Test in a Safe Environment: Always test your scripts in a non-production environment first.
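The idempotency point deserves an example. A script that blindly appends a line to a config file will duplicate it on every run; an idempotent version checks first, so running it once or ten times gives the same result. A minimal Python sketch (the file path and setting in the comment are made up):

```python
def ensure_line(path, line):
    """Append line to the file only if it isn't already present (idempotent)."""
    try:
        with open(path) as f:
            if line in (l.rstrip("\n") for l in f):
                return False  # already there, nothing to do
    except FileNotFoundError:
        pass  # file doesn't exist yet; we'll create it below

    with open(path, "a") as f:
        f.write(line + "\n")
    return True

# Safe to run any number of times:
# ensure_line("/etc/myapp.conf", "max_connections=100")
```

The same instinct applies in Bash: prefer `mkdir -p` over `mkdir`, and `grep -q` before `echo >>`.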
Next Steps in Your DevOps Automation Journey
Congrats! You've taken your first steps into DevOps automation with Bash and Python. But this is just the beginning. Here's what you can do to continue growing:
- Practice regularly: Identify manual tasks in your workflow and try to automate them.
- Learn more advanced features: Explore more complex Bash features like process substitution or Python libraries like Paramiko for SSH automation.
- Contribute to open source: Find automation projects on GitHub and contribute to them.
- Create a personal automation library: Build a collection of scripts you can reuse across projects.
In the next post of our series, we'll dive deeper into configuration management with tools like Ansible, Chef, and Puppet. These tools build on the scripting foundations we've laid today, taking your automation capabilities to the next level.
Remember, automation isn't just about making your job easier—it's about creating reliable, repeatable processes that help your entire team deliver better software faster.
What manual tasks are you hoping to automate with your new scripting skills? Drop a comment below, and let's discuss how Bash and Python might help!
This post is part of our "Mastering Automation in DevOps" series. Check out the first post if you missed it, and stay tuned for the next installment on configuration management.