# Setting Up Ansible with Tailscale for Remote Server Management
## Overview

This guide walks you through setting up Ansible to manage remote servers (like ThinkCentre units) using Tailscale for secure networking. This approach provides reliable remote access without complex port forwarding or VPN configurations.

In plainer language: this lets you manage several Changemaker nodes remotely. If you are a full-time campaigner, you can manage several campaigns' infrastructure from a central location while each user gets their own Changemaker box.

## What You'll Learn

- How to set up Ansible for infrastructure automation
- How to configure secure remote access using Tailscale
- How to troubleshoot common SSH and networking issues
- Why this approach is better than alternatives like Cloudflare Tunnels for simple SSH access

## Prerequisites

- **Master Node**: Your main computer running Ubuntu/Linux (control machine)
- **Target Nodes**: Remote servers/ThinkCentres running Ubuntu/Linux
- **Both machines**: Must have internet access
- **User Account**: Same username on all machines (recommended)

## Part 1: Initial Setup on Master Node

### 1. Create Ansible Directory Structure

```bash
# Create project directory
mkdir ~/ansible_quickstart
cd ~/ansible_quickstart

# Create directory structure
mkdir -p group_vars host_vars roles playbooks
```

### 2. Install Ansible

```bash
sudo apt update
sudo apt install ansible
```

### 3. Generate SSH Keys (if not already done)

```bash
# Generate SSH key pair
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa

# Display public key (save this for later)
cat ~/.ssh/id_rsa.pub
```

## Part 2: Target Node Setup (Physical Access Required Initially)

### 1. Enable SSH on Target Node

Access each target node physically (monitor + keyboard):

```bash
# Update system
sudo apt update && sudo apt upgrade -y

# Install and enable SSH
sudo apt install openssh-server
sudo systemctl enable ssh
sudo systemctl start ssh

# Check SSH status
sudo systemctl status ssh
```

**Note**: If you get "Unit ssh.service could not be found", you need to install the SSH server first:

```bash
# Install OpenSSH server
sudo apt install openssh-server

# Then start and enable SSH
sudo systemctl start ssh
sudo systemctl enable ssh

# Verify SSH is running and listening
sudo ss -tlnp | grep :22
```

You should see SSH listening on port 22.

### 2. Configure SSH Key Authentication

```bash
# Create .ssh directory
mkdir -p ~/.ssh
chmod 700 ~/.ssh

# Create authorized_keys file
nano ~/.ssh/authorized_keys
```

Paste your public key from the master node, then:

```bash
# Set proper permissions
chmod 600 ~/.ssh/authorized_keys
```

### 3. Configure SSH Security

```bash
# Edit SSH config
sudo nano /etc/ssh/sshd_config
```

Ensure these lines are uncommented:

```
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2
```

```bash
# Restart SSH service
sudo systemctl restart ssh
```

### 4. Configure Firewall

```bash
# Check firewall status
sudo ufw status

# Allow SSH through firewall
sudo ufw allow ssh

# Fix home directory permissions (required for SSH keys)
chmod 755 ~/
```

## Part 3: Test Local SSH Connection

Before proceeding with remote access, test SSH connectivity locally:

```bash
# From master node, test SSH to target
ssh username@<target-local-ip>
```

**Common Issues and Solutions:**

- **Connection hangs**: Check firewall rules (`sudo ufw allow ssh`)
- **Permission denied**: Verify SSH keys and file permissions
- **SSH config errors**: Ensure `PubkeyAuthentication yes` is set

## Part 4: Set Up Tailscale for Remote Access

### Why Tailscale Over Alternatives

We initially tried Cloudflare Tunnels but encountered complexity with:

- DNS routing issues
- Complex configuration for SSH
- Same-network testing problems
- Multiple configuration approaches with varying success

**Tailscale is superior because:**

- Zero-configuration mesh networking
- Works from any network
- Persistent IP addresses
- No port forwarding needed
- Free for personal use

### 1. Install Tailscale on Master Node

```bash
# Install Tailscale
curl -fsSL https://tailscale.com/install.sh | sh

# Connect to Tailscale network
sudo tailscale up
```

Follow the authentication URL to connect with your Google/Microsoft/GitHub account.

### 2. Install Tailscale on Target Nodes

**On each target node:**

```bash
# Install Tailscale
curl -fsSL https://tailscale.com/install.sh | sh

# Connect to Tailscale network
sudo tailscale up
```

Authenticate each device through the provided URL.

### 3. Get Tailscale IP Addresses

**On each machine:**

```bash
# Get your Tailscale IP
tailscale ip -4
```

Each device receives a persistent IP like `100.x.x.x`.

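
Once the Tailscale IPs are known, it can be convenient to give them short names in `~/.ssh/config` on the master node. A sketch (the `tc-node1` alias, IP, and username are placeholders to replace with your own values):

```
Host tc-node1
    HostName 100.x.x.x
    User your-username
    IdentityFile ~/.ssh/id_rsa
```

With an entry like this, `ssh tc-node1` works from anywhere on your tailnet without remembering the raw IP.
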
## Part 5: Configure Ansible

### 1. Create Inventory File

```bash
# Create inventory.ini
cd ~/ansible_quickstart
nano inventory.ini
```

**Content:**

```ini
[thinkcenter]
tc-node1 ansible_host=100.x.x.x ansible_user=your-username
tc-node2 ansible_host=100.x.x.x ansible_user=your-username

[all:vars]
ansible_ssh_private_key_file=~/.ssh/id_rsa
ansible_host_key_checking=False
```

Replace:

- `100.x.x.x` with actual Tailscale IPs
- `your-username` with your actual username

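
To avoid typing `-i inventory.ini` on every command, you can optionally drop a project-local `ansible.cfg` next to the inventory; Ansible picks it up automatically when run from that directory. A minimal sketch:

```ini
[defaults]
inventory = inventory.ini
private_key_file = ~/.ssh/id_rsa
host_key_checking = False
```

With this in place, `ansible all -m ping` behaves the same as the longer forms used throughout this guide.
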
### 2. Test Ansible Connectivity

```bash
# Test connection to all nodes
ansible all -i inventory.ini -m ping
```

**Expected output:**

```
tc-node1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```

## Part 6: Create and Run Playbooks

### 1. Simple Information Gathering Playbook

```bash
mkdir -p playbooks
nano playbooks/info-playbook.yml
```

**Content:**

```yaml
---
- name: Gather Node Information
  hosts: all
  tasks:
    - name: Get system information
      setup:

    - name: Display basic system info
      debug:
        msg: |
          Hostname: {{ ansible_hostname }}
          Operating System: {{ ansible_distribution }} {{ ansible_distribution_version }}
          Architecture: {{ ansible_architecture }}
          Memory: {{ ansible_memtotal_mb }}MB
          CPU Cores: {{ ansible_processor_vcpus }}

    - name: Show disk usage
      command: df -h /
      register: disk_info

    - name: Display disk usage
      debug:
        msg: "Root filesystem usage: {{ disk_info.stdout_lines[1] }}"

    - name: Check uptime
      command: uptime
      register: uptime_info

    - name: Display uptime
      debug:
        msg: "System uptime: {{ uptime_info.stdout }}"
```

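
A note on `disk_info.stdout_lines[1]` above: `df -h /` prints a header row first, so index 1 is the data row for the root filesystem. The equivalent selection in plain shell, if you want to sanity-check it on a node:

```shell
# Line 1 of df output is the column header; line 2 (sed -n '2p')
# is the data row for /, which is what stdout_lines[1] selects.
df -h / | sed -n '2p'
```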
### 2. Run the Playbook

```bash
ansible-playbook -i inventory.ini playbooks/info-playbook.yml
```

## Part 7: Advanced Playbook Example

### System Setup Playbook

```bash
nano playbooks/setup-node.yml
```

**Content:**

```yaml
---
- name: Setup ThinkCentre Node
  hosts: all
  become: yes
  tasks:
    - name: Update package cache
      apt:
        update_cache: yes

    - name: Install essential packages
      package:
        name:
          - htop
          - vim
          - curl
          - git
          - docker.io
        state: present

    - name: Add user to docker group
      user:
        name: "{{ ansible_user }}"
        groups: docker
        append: yes

    - name: Create management directory
      file:
        path: /opt/management
        state: directory
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
```

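
When a play both installs and configures a service, Ansible handlers let a restart fire only when a task actually changed something. A sketch building on the Docker install above (the `daemon.json` contents here are purely illustrative, not a recommendation):

```yaml
---
- name: Configure Docker with a handler
  hosts: all
  become: yes
  tasks:
    - name: Deploy Docker daemon config   # example config, adjust to taste
      copy:
        dest: /etc/docker/daemon.json
        content: '{"log-driver": "json-file"}'
      notify: Restart docker

  handlers:
    - name: Restart docker   # runs once, at end of play, only if notified
      service:
        name: docker
        state: restarted
```
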
## Troubleshooting Guide

### SSH Issues

**Problem: SSH connection hangs**

- Check firewall: `sudo ufw status` and `sudo ufw allow ssh`
- Verify SSH service: `sudo systemctl status ssh`
- Test local connectivity first

**Problem: Permission denied (publickey)**

- Check SSH key permissions: `chmod 600 ~/.ssh/authorized_keys`
- Verify home directory permissions: `chmod 755 ~/`
- Ensure SSH config allows key auth: `PubkeyAuthentication yes`

**Problem: Bad owner or permissions on SSH config**

```bash
chmod 600 ~/.ssh/config
```

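
The permission checks above can be verified in one pass. A small shell audit to run on the target node (it only reads modes and changes nothing; expected values are 755 or stricter for `$HOME`, 700 for `~/.ssh`, 600 for `authorized_keys`):

```shell
# Print the octal mode of each path in the key-based login chain.
# sshd rejects public-key auth when any of these is too permissive.
for p in "$HOME" "$HOME/.ssh" "$HOME/.ssh/authorized_keys"; do
  if [ -e "$p" ]; then
    printf '%s %s\n' "$(stat -c '%a' "$p")" "$p"
  else
    printf 'missing: %s\n' "$p"
  fi
done
```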
### Ansible Issues

**Problem: Host key verification failed**

- Add to inventory: `ansible_host_key_checking=False`

**Problem: Ansible command not found**

```bash
sudo apt install ansible
```

**Problem: Connection timeouts**

- Verify Tailscale connectivity: `ping <tailscale-ip>`
- Check if both nodes are connected: `tailscale status`

### Tailscale Issues

**Problem: Can't connect to Tailscale IP**

- Verify both devices are authenticated: `tailscale status`
- Check Tailscale is running: `sudo systemctl status tailscaled`
- Restart Tailscale: `sudo tailscale up`

## Scaling to Multiple Nodes

### Adding New Nodes

1. **Install Tailscale on new node**
2. **Set up SSH access** (repeat Part 2)
3. **Add to inventory.ini:**

```ini
[thinkcenter]
tc-node1 ansible_host=100.125.148.60 ansible_user=bunker-admin
tc-node2 ansible_host=100.x.x.x ansible_user=bunker-admin
tc-node3 ansible_host=100.x.x.x ansible_user=bunker-admin
```

### Group Management

```ini
[webservers]
tc-node1 ansible_host=100.x.x.x ansible_user=bunker-admin
tc-node2 ansible_host=100.x.x.x ansible_user=bunker-admin

[databases]
tc-node3 ansible_host=100.x.x.x ansible_user=bunker-admin

[all:vars]
ansible_ssh_private_key_file=~/.ssh/id_rsa
ansible_host_key_checking=False
```

Run playbooks on specific groups:

```bash
ansible-playbook -i inventory.ini -l webservers playbook.yml
```

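
Alternatively, a play can name the group directly in its `hosts:` line, so no `-l` flag is needed for plays that always target the same hosts. A sketch (the `nginx` package is just an illustrative choice for a `webservers` group):

```yaml
---
- name: Webserver baseline
  hosts: webservers   # targets only the [webservers] group from inventory.ini
  become: yes
  tasks:
    - name: Install nginx   # illustrative package for this group
      apt:
        name: nginx
        state: present
```
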
## Best Practices

### Security

- Use SSH keys, not passwords
- Keep Tailscale client updated
- Regular security updates via Ansible
- Use `become: yes` only when necessary

### Organization

```
ansible_quickstart/
├── inventory.ini
├── group_vars/
├── host_vars/
├── roles/
└── playbooks/
    ├── info-playbook.yml
    ├── setup-node.yml
    └── maintenance.yml
```

### Monitoring and Maintenance

Create regular maintenance playbooks:

```yaml
---
- name: System maintenance
  hosts: all
  become: yes
  tasks:
    - name: Update all packages
      apt:
        upgrade: dist
        update_cache: yes

    - name: Clean package cache
      apt:
        autoclean: yes
        autoremove: yes
```

## Alternative Approaches We Considered

### Cloudflare Tunnels

- **Pros**: Good for web services, handles NAT traversal
- **Cons**: Complex SSH setup, DNS routing issues, same-network problems
- **Use case**: Better for web applications than SSH access

### Traditional VPN

- **Pros**: Full network access
- **Cons**: Complex setup, port forwarding required, router configuration
- **Use case**: When you control the network infrastructure

### SSH Reverse Tunnels

- **Pros**: Simple concept
- **Cons**: Requires VPS, single point of failure, manual setup
- **Use case**: Temporary access or when other methods fail

## Conclusion

This setup provides:

- **Reliable remote access** from anywhere
- **Secure mesh networking** with Tailscale
- **Infrastructure automation** with Ansible
- **Easy scaling** to multiple nodes
- **No complex networking** required

The combination of Ansible + Tailscale is ideal for managing distributed infrastructure without the complexity of traditional VPN setups or the limitations of cloud-specific solutions.

## Quick Reference Commands

```bash
# Check Tailscale status
tailscale status

# Test Ansible connectivity
ansible all -i inventory.ini -m ping

# Run playbook on all hosts
ansible-playbook -i inventory.ini playbook.yml

# Run playbook on specific group
ansible-playbook -i inventory.ini -l groupname playbook.yml

# Run single command on all hosts
ansible all -i inventory.ini -m command -a "uptime"

# SSH to node via Tailscale
ssh username@100.x.x.x
```