
CI/CD Pipeline Setup Guide

This guide covers setting up a complete Continuous Integration/Continuous Deployment (CI/CD) pipeline with a CI/CD Linode and Production Linode for automated builds, testing, and deployments.

Architecture Overview

┌───────────────────┐    ┌───────────────────┐    ┌───────────────────┐
│   Forgejo Host    │    │   CI/CD Linode    │    │ Production Linode │
│   (Repository)    │    │ (Actions Runner)  │    │  (Docker Deploy)  │
│                   │    │ + Docker Registry │    │                   │
└───────────────────┘    └───────────────────┘    └───────────────────┘
          │                        │                        │
          │                        │                        │
          └────────── Push ───────►│                        │
                                   │                        │
                                   └──────── Deploy ───────►│

Pipeline Flow

  1. Code Push: Developer pushes code to Forgejo repository
  2. Automated Testing: CI/CD Linode runs tests on backend and frontend
  3. Image Building: If tests pass, Docker images are built
  4. Registry Push: Images are pushed to private registry on CI/CD Linode
  5. Production Deployment: Production Linode pulls images and deploys
  6. Health Check: Application is verified and accessible
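In a Forgejo Actions setup, this flow is typically encoded as one workflow file with dependent jobs. A minimal sketch follows; the job layout, script paths, labels, and image names are illustrative assumptions, not this repository's actual workflow:

```shell
# Write a sketch workflow to a scratch path (the real file would live at
# .forgejo/workflows/ci.yml in the repository).
mkdir -p /tmp/wf-sketch
cat > /tmp/wf-sketch/ci.yml << 'EOF'
name: CI/CD Pipeline
on: [push]
jobs:
  test:
    runs-on: docker
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/run-tests.sh        # hypothetical test entry point
  build-and-push:
    needs: test                            # images build only if tests pass
    runs-on: docker
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t YOUR_CI_CD_IP:5000/APP_NAME/backend:latest ./backend
      - run: docker push YOUR_CI_CD_IP:5000/APP_NAME/backend:latest
  deploy:
    needs: build-and-push                  # deploy only after a successful push
    runs-on: docker
    steps:
      - run: ssh production 'cd /opt/APP_NAME && docker compose pull && docker compose up -d'
EOF
grep -c 'needs:' /tmp/wf-sketch/ci.yml    # → 2
```

The two `needs:` edges are what enforce the ordering described above: no image is built before tests pass, and no deploy happens before the push succeeds.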

Prerequisites

  • Two Ubuntu 24.04 LTS Linodes with root access
  • Basic familiarity with Linux commands and SSH
  • Forgejo repository with Actions enabled
  • Optional: Domain name for Production Linode (for SSL/TLS)

Quick Start

  1. Set up CI/CD Linode (Steps 1-13)
  2. Set up Production Linode (Steps 14-26)
  3. Configure SSH key exchange (Step 27)
  4. Set up Forgejo repository secrets (Step 28)
  5. Test the complete pipeline (Step 29)

What's Included

CI/CD Linode Features

  • Forgejo Actions runner for automated builds
  • Local Docker registry for image storage
  • Registry web UI for image management
  • Automated cleanup of old images
  • Secure SSH communication with production

Production Linode Features

  • Docker-based application deployment
  • Optional SSL/TLS certificate management (if domain is provided)
  • Nginx reverse proxy with security headers
  • Automated backups and monitoring
  • Firewall and fail2ban protection

Pipeline Features

  • Automated testing on every code push
  • Automated image building and registry push
  • Automated deployment to production
  • Rollback capability with image versioning
  • Health monitoring and logging

Security Model and User Separation

This setup follows the principle of least privilege, using separate users for different purposes:

User Roles

  1. Root User

    • Purpose: Initial system setup only
    • SSH Access: Disabled after setup
    • Privileges: Full system access (used only during initial configuration)
  2. Deployment User (DEPLOY_USER)

    • Purpose: SSH access, deployment tasks, system administration
    • SSH Access: Enabled with key-based authentication
    • Privileges: Sudo access for deployment and administrative tasks
    • Examples: deploy, ci, admin
  3. Service Account (SERVICE_USER)

    • Purpose: Running application services (Docker containers, databases)
    • SSH Access: None (no login shell)
    • Privileges: No sudo access, minimal system access
    • Examples: appuser, service, app

Security Benefits

  • No root SSH access: Eliminates the most common attack vector
  • Principle of least privilege: Each user has only the access they need
  • Separation of concerns: Deployment tasks vs. service execution are separate
  • Audit trail: Clear distinction between deployment and service activities
  • Reduced attack surface: Service account has minimal privileges

File Permissions

  • Application files: Owned by SERVICE_USER for security
  • Docker operations: Run by DEPLOY_USER with sudo (deployment only)
  • Service execution: Run by SERVICE_USER (no sudo needed)

Prerequisites and Initial Setup

What's Already Done (Assumptions)

This guide assumes you have already:

  1. Created two Ubuntu 24.04 LTS Linodes with root access
  2. Set root passwords for both Linodes
  3. Have SSH client installed on your local machine
  4. Have Forgejo repository with Actions enabled
  5. Optional: Domain name pointing to Production Linode's IP addresses

Step 0: Initial SSH Access and Verification

Before proceeding with the setup, you need to establish initial SSH access to both Linodes.

0.1 Get Your Linode IP Addresses

From your Linode dashboard, note the IP addresses for:

  • CI/CD Linode: YOUR_CI_CD_IP (IP address only, no domain needed)
  • Production Linode: YOUR_PRODUCTION_IP (IP address for SSH, domain for web access)

0.2 Test Initial SSH Access

Test SSH access to both Linodes:

# Test CI/CD Linode (IP address only)
ssh root@YOUR_CI_CD_IP

# Test Production Linode (IP address only)
ssh root@YOUR_PRODUCTION_IP

Expected output: SSH login prompt asking for root password.

If something goes wrong:

  • Verify the IP addresses are correct
  • Check that SSH is enabled on the Linodes
  • Ensure your local machine can reach the Linodes (no firewall blocking)

0.3 Choose Your Names

Before proceeding, decide on:

  1. Service Account Name: Choose a username for the service account (e.g., appuser, deploy, service)

    • Replace SERVICE_USER in this guide with your chosen name
    • This account runs the actual application services
  2. Deployment User Name: Choose a username for deployment tasks (e.g., deploy, ci, admin)

    • Replace DEPLOY_USER in this guide with your chosen name
    • This account has sudo privileges for deployment tasks
  3. Application Name: Choose a name for your application (e.g., myapp, webapp, api)

    • Replace APP_NAME in this guide with your chosen name
  4. Domain Name (Optional): If you have a domain, note it for SSL configuration

    • Replace your-domain.com in this guide with your actual domain

Example:

  • If you choose appuser as service account, deploy as deployment user, and myapp as application name:
    • Replace all SERVICE_USER with appuser
    • Replace all DEPLOY_USER with deploy
    • Replace all APP_NAME with myapp
    • If you have a domain example.com, replace your-domain.com with example.com
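If you keep configuration files containing these placeholders, the substitution can be scripted with sed. A small sketch using the example names above, run against a scratch file rather than your real configs:

```shell
# Demonstrate placeholder substitution on a scratch file
printf 'user=SERVICE_USER deploy=DEPLOY_USER app=APP_NAME\n' > /tmp/placeholders.demo
sed -i -e 's/SERVICE_USER/appuser/g' \
       -e 's/DEPLOY_USER/deploy/g' \
       -e 's/APP_NAME/myapp/g' /tmp/placeholders.demo
cat /tmp/placeholders.demo   # → user=appuser deploy=deploy app=myapp
```

The same three `-e` expressions can be applied to any file in this guide once you have chosen your names.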

Security Model:

  • Service Account (SERVICE_USER): Runs application services, no sudo access
  • Deployment User (DEPLOY_USER): Handles deployments via SSH, has sudo access
  • Root: Only used for initial setup, then disabled for SSH access

0.4 Set Up SSH Key Authentication for Local Development

Important: This step should be done on both Linodes to enable secure SSH access from your local development machine.

0.4.1 Generate SSH Key on Your Local Machine

On your local development machine, generate an SSH key pair:

# Generate SSH key pair (if you don't already have one)
ssh-keygen -t ed25519 -C "your-email@example.com" -f ~/.ssh/id_ed25519 -N ""

# Or use existing key if you have one
ls ~/.ssh/id_ed25519.pub
0.4.2 Add Your Public Key to Both Linodes

Copy your public key to both Linodes:

# Copy your public key to CI/CD Linode
ssh-copy-id root@YOUR_CI_CD_IP

# Copy your public key to Production Linode
ssh-copy-id root@YOUR_PRODUCTION_IP

Alternative method (if ssh-copy-id doesn't work):

# Copy your public key content
cat ~/.ssh/id_ed25519.pub

# Then manually add to each server
ssh root@YOUR_CI_CD_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys

ssh root@YOUR_PRODUCTION_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys
0.4.3 Test SSH Key Authentication

Test that you can access both servers without passwords:

# Test CI/CD Linode
ssh root@YOUR_CI_CD_IP 'echo "SSH key authentication works for CI/CD"'

# Test Production Linode
ssh root@YOUR_PRODUCTION_IP 'echo "SSH key authentication works for Production"'

Expected output: The echo messages should appear without password prompts.

0.4.4 Create Deployment Users

On both Linodes, create the deployment user with sudo privileges:

# Create deployment user
sudo useradd -m -s /bin/bash DEPLOY_USER
sudo usermod -aG sudo DEPLOY_USER

# Set a secure password (for emergency access only)
echo "DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd

# Copy your SSH key to the deployment user
sudo mkdir -p /home/DEPLOY_USER/.ssh
sudo cp ~/.ssh/authorized_keys /home/DEPLOY_USER/.ssh/
sudo chown -R DEPLOY_USER:DEPLOY_USER /home/DEPLOY_USER/.ssh
sudo chmod 700 /home/DEPLOY_USER/.ssh
sudo chmod 600 /home/DEPLOY_USER/.ssh/authorized_keys

# Configure sudo to use SSH key authentication (most secure)
echo "DEPLOY_USER ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/DEPLOY_USER
sudo chmod 440 /etc/sudoers.d/DEPLOY_USER

Security Note: This configuration allows the DEPLOY_USER to use sudo without a password, which is more secure for CI/CD automation since there are no passwords to store or expose. The random password is set for emergency console access only.
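The permission bits above matter: sshd silently ignores an authorized_keys file whose directory or file is group- or world-accessible. A quick way to verify the expected modes, demonstrated here on scratch paths rather than the real home directory:

```shell
# Replicate the expected layout on scratch paths and print the octal modes
mkdir -p /tmp/ssh-demo && chmod 700 /tmp/ssh-demo
touch /tmp/ssh-demo/authorized_keys && chmod 600 /tmp/ssh-demo/authorized_keys
stat -c '%a %n' /tmp/ssh-demo /tmp/ssh-demo/authorized_keys
# → 700 /tmp/ssh-demo
# → 600 /tmp/ssh-demo/authorized_keys
```

For the real check, point stat at /home/DEPLOY_USER/.ssh and its authorized_keys file and confirm you see 700 and 600.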

0.4.5 Test Sudo Access

Test that the deployment user can use sudo without password prompts:

# Test sudo access
ssh DEPLOY_USER@YOUR_CI_CD_IP 'sudo whoami'
ssh DEPLOY_USER@YOUR_PRODUCTION_IP 'sudo whoami'

Expected output: Both commands should return root without prompting for a password.

0.4.6 Test Deployment User Access

Test that you can access both servers as the deployment user:

# Test CI/CD Linode
ssh DEPLOY_USER@YOUR_CI_CD_IP 'echo "Deployment user SSH access works for CI/CD"'

# Test Production Linode
ssh DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "Deployment user SSH access works for Production"'

Expected output: The echo messages should appear without password prompts.

0.4.7 Create SSH Config for Easy Access

On your local machine, create an SSH config file for easy access:

# Create SSH config
cat > ~/.ssh/config << 'EOF'
Host ci-cd-dev
    HostName YOUR_CI_CD_IP
    User DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no

Host production-dev
    HostName YOUR_PRODUCTION_IP
    User DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no
EOF

chmod 600 ~/.ssh/config

Now you can access servers easily:

ssh ci-cd-dev
ssh production-dev

Part 1: CI/CD Linode Setup

Step 1: Initial System Setup

1.1 Update the System

sudo apt update && sudo apt upgrade -y

What this does: Updates package lists and upgrades all installed packages to their latest versions.

Expected output: A list of packages being updated, followed by completion messages.

1.2 Configure Timezone

# Configure timezone interactively
sudo dpkg-reconfigure tzdata

# Verify timezone setting
date

What this does: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).

Expected output: After selecting your timezone, the date command should show the current date and time in your selected timezone.

1.3 Configure /etc/hosts

# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts

# Verify the configuration
cat /etc/hosts

What this does:

  • Adds localhost entries for both IPv4 and IPv6 addresses to /etc/hosts
  • Ensures proper localhost resolution for both IPv4 and IPv6

Important: Replace YOUR_CI_CD_IPV4_ADDRESS and YOUR_CI_CD_IPV6_ADDRESS with the actual IPv4 and IPv6 addresses of your CI/CD Linode obtained from your Linode dashboard.

Expected output: The /etc/hosts file should show entries for 127.0.0.1, ::1, and your Linode's actual IP addresses all mapping to localhost.
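Note that re-running the echo-append commands above adds duplicate lines. A small guard makes the append idempotent; sketched here on a scratch copy rather than the real /etc/hosts:

```shell
# Append an entry only if an identical line is not already present
HOSTS=/tmp/hosts.demo
printf '127.0.0.1 localhost\n' > "$HOSTS"
add_host() { grep -qxF "$1" "$HOSTS" || echo "$1" >> "$HOSTS"; }
add_host "127.0.0.1 localhost"                       # duplicate: skipped
add_host "::1 localhost ip6-localhost ip6-loopback"  # new: appended
wc -l < "$HOSTS"   # → 2
```

To use this against the real file, set HOSTS=/etc/hosts and run the script with sudo.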

1.4 Install Essential Packages

sudo apt install -y \
    curl \
    wget \
    git \
    build-essential \
    pkg-config \
    libssl-dev \
    ca-certificates \
    apt-transport-https \
    software-properties-common \
    apache2-utils

What this does: Installs development tools, SSL libraries, and utilities needed for Docker and application building.

Step 2: Create Users

2.1 Create Service Account

# Create dedicated group for the service account
sudo groupadd -r SERVICE_USER

# Create service account user with dedicated group
sudo useradd -r -g SERVICE_USER -s /bin/bash -m -d /home/SERVICE_USER SERVICE_USER
echo "SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd

2.2 Verify Users

sudo su - SERVICE_USER
whoami
pwd
exit

sudo su - DEPLOY_USER
whoami
pwd
exit

Step 3: Install Docker

3.1 Add Docker Repository

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update

3.2 Install Docker Packages

sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

3.3 Configure Docker for Service Account

sudo usermod -aG docker SERVICE_USER

Step 4: Set Up Docker Registry

4.1 Create Registry Directory

sudo mkdir -p /opt/registry
sudo chown SERVICE_USER:SERVICE_USER /opt/registry

4.2 Create Registry Configuration

# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER

cat > /opt/registry/config.yml << 'EOF'
version: 0.1
log:
  level: info
storage:
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
  cache:
    blobdescriptor: inmemory
http:
  addr: :5000
  tls:
    certificate: /etc/docker/registry/ssl/registry.crt
    key: /etc/docker/registry/ssl/registry.key
  headers:
    X-Content-Type-Options: [nosniff]
    X-Frame-Options: [DENY]
    X-XSS-Protection: [1; mode=block]
    Access-Control-Allow-Origin: ["*"]
    Access-Control-Allow-Methods: ["HEAD", "GET", "OPTIONS", "DELETE"]
    Access-Control-Allow-Headers: ["Authorization", "Content-Type", "Accept", "Accept-Encoding", "Accept-Language", "Cache-Control", "Connection", "DNT", "Pragma", "User-Agent"]
  # Public read access, authentication required for push
  auth:
    htpasswd:
      realm: basic-realm
      path: /etc/docker/registry/auth/auth.htpasswd
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
EOF

# Exit SERVICE_USER shell
exit

What this configuration does:

  • HTTPS Enabled: Uses TLS certificates for secure communication
  • Public Read Access: Anyone can pull images without authentication
  • Authenticated Push: Only authenticated users can push images
  • Security Headers: Protects against common web vulnerabilities
  • CORS Headers: Allows the registry UI to access the registry API with all necessary headers
  • No Secret Key: The http.secret field is omitted; it is only needed when multiple registry instances share storage behind a load balancer

Security Note: We switch to SERVICE_USER because the registry directory is owned by SERVICE_USER, maintaining proper file ownership and security.

4.2.1 Generate SSL Certificates

# Switch to DEPLOY_USER (these commands need sudo; the service account has none)
sudo su - DEPLOY_USER

# Create system SSL directory for registry certificates
sudo mkdir -p /etc/ssl/registry

# Get your actual IP address
YOUR_ACTUAL_IP=$(curl -4 -s ifconfig.me)
echo "Your IP address is: $YOUR_ACTUAL_IP"

# Generate self-signed certificate with actual IP in system directory
sudo openssl req -x509 -newkey rsa:4096 -keyout /etc/ssl/registry/registry.key -out /etc/ssl/registry/registry.crt -days 365 -nodes -subj "/C=US/ST=State/L=City/O=Organization/CN=$YOUR_ACTUAL_IP"

# Set proper permissions
sudo chmod 600 /etc/ssl/registry/registry.key
sudo chmod 644 /etc/ssl/registry/registry.crt

# Exit DEPLOY_USER shell
exit

Important: The certificate is now generated in the system SSL directory /etc/ssl/registry/ with your actual CI/CD Linode IP address automatically.
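To confirm the CN actually carries your IP address, read the certificate back with openssl x509. Demonstrated here on a throwaway certificate generated the same way (203.0.113.10 is a placeholder IP); for the real check, point -in at /etc/ssl/registry/registry.crt:

```shell
# Generate a scratch certificate the same way the step above does,
# then read back its subject to verify the CN
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 1 -nodes -subj "/C=US/ST=State/L=City/O=Organization/CN=203.0.113.10" 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -subject
```

The subject line printed should end with the CN you passed in; if it shows a stale or wrong IP, regenerate the certificate before starting the registry.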

4.3 Create Authentication File

# Switch to DEPLOY_USER (these commands need sudo; the service account has none)
sudo su - DEPLOY_USER

# Create system auth directory for registry authentication
sudo mkdir -p /etc/registry/auth

# Create htpasswd file for authentication (required for push operations only)
htpasswd -Bbn push-user "$(openssl rand -base64 32)" > /tmp/auth.htpasswd
sudo mv /tmp/auth.htpasswd /etc/registry/auth/auth.htpasswd

# Exit DEPLOY_USER shell
exit

What this does: Creates user credentials for registry authentication in the system auth directory. Note that htpasswd stores only a bcrypt hash of the password, not the password itself; if you need the plain password later (for example, for docker login), generate it into a variable first and keep a copy somewhere safe.

  • push-user: Can push images (used by CI/CD pipeline for deployments)

Note: Pull operations are public and don't require authentication, but push operations require these credentials.

4.3.1 Clone Repository for Registry Configuration

# Switch to DEPLOY_USER (who has sudo access)
sudo su - DEPLOY_USER

# Create application directory and clone repository
sudo mkdir -p /opt/APP_NAME
sudo chown SERVICE_USER:SERVICE_USER /opt/APP_NAME
cd /opt
sudo git clone https://your-forgejo-instance/your-username/APP_NAME.git
sudo chown -R SERVICE_USER:SERVICE_USER APP_NAME/

# Verify the registry folder exists
ls -la /opt/APP_NAME/registry/

# Exit DEPLOY_USER shell
exit

Important: Replace your-forgejo-instance, your-username, and APP_NAME with your actual Forgejo instance URL, username, and application name.

What this does:

  • DEPLOY_USER creates the directory structure and clones the repository
  • SERVICE_USER owns all the files for security
  • Registry configuration files are now available at /opt/APP_NAME/registry/

4.4 Create Docker Compose for Registry

# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER

# The registry configuration files are already available in the cloned repository
# at /opt/APP_NAME/registry/
# No file copying is needed - we'll use the files directly from the repository

# Exit SERVICE_USER shell
exit

Important: The repository should be cloned in the previous step (4.3.1) to /opt/APP_NAME/. The registry configuration files are used directly from the repository.

4.4.1 Update Configuration with Actual IP Address

# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER

cd /opt/APP_NAME/registry

# Get your actual IP address
YOUR_ACTUAL_IP=$(curl -4 -s ifconfig.me)
echo "Your IP address is: $YOUR_ACTUAL_IP"

# Replace placeholder IP addresses in configuration files
sed -i "s/YOUR_CI_CD_IP/$YOUR_ACTUAL_IP/g" docker-compose.yml
sed -i "s/YOUR_CI_CD_IP/$YOUR_ACTUAL_IP/g" nginx.conf

# Exit SERVICE_USER shell
exit

Important: This step replaces all instances of YOUR_CI_CD_IP with your actual CI/CD Linode IP address in both the docker-compose.yml and nginx.conf files in the repository.

4.5 Install Required Tools

# Install htpasswd utility
sudo apt install -y apache2-utils

4.6 Start Registry

# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER

cd /opt/APP_NAME/registry
docker compose up -d

# Exit SERVICE_USER shell
exit

4.7 Test Registry Setup

# Switch to SERVICE_USER (member of the docker group)
sudo su - SERVICE_USER

# Check if containers are running
cd /opt/APP_NAME/registry
docker compose ps

# Test registry API (HTTPS via nginx)
curl -k https://localhost:8080/v2/_catalog

# Test registry UI (HTTPS via nginx)
curl -I https://localhost:8080

# Test Docker push/pull (optional but recommended)
# Create a test image
echo "FROM alpine:latest" > /tmp/test.Dockerfile
echo "RUN echo 'Hello from test image'" >> /tmp/test.Dockerfile

# Build and tag test image
docker build -f /tmp/test.Dockerfile -t localhost:8080/test:latest /tmp

# Push to registry
docker push localhost:8080/test:latest

# Verify image is in registry
curl -k https://localhost:8080/v2/_catalog
curl -k https://localhost:8080/v2/test/tags/list

# Pull image back (verifies pull works)
docker rmi localhost:8080/test:latest
docker pull localhost:8080/test:latest

# Clean up test image completely
# Remove from local Docker
docker rmi localhost:8080/test:latest

# Clean up test file
rm /tmp/test.Dockerfile

# Get the manifest digest for the 'latest' tag (the registry returns it in
# the Docker-Content-Digest response header)
curl -k -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://localhost:8080/v2/test/manifests/latest | grep -i docker-content-digest

# Copy the digest value from the header above (starts with "sha256:")
# Then delete the manifest using that digest:
curl -k -X DELETE https://localhost:8080/v2/test/manifests/<digest>

# Run garbage collection to remove orphaned blobs
docker compose exec registry /bin/registry garbage-collect /etc/docker/registry/config.yml --delete-untagged

# Remove the repository directory structure
docker compose exec registry rm -rf /var/lib/registry/docker/registry/v2/repositories/test

# Verify registry is empty
echo "Verifying registry is now empty..."
curl -k https://localhost:8080/v2/_catalog

# Exit SERVICE_USER shell
exit
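For context on the deletion step above: a manifest digest is nothing more than the SHA-256 of the raw manifest bytes, so the value the registry reports can also be recomputed locally. A sketch with a stand-in manifest body (not a real manifest):

```shell
# Compute a digest the way the registry does: sha256 over the exact bytes
MANIFEST='{"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.v2+json"}'
DIGEST="sha256:$(printf '%s' "$MANIFEST" | sha256sum | awk '{print $1}')"
echo "$DIGEST"   # 71 characters: "sha256:" plus 64 hex digits
```

Because the digest is content-addressed, even a one-byte change to the manifest (including whitespace) produces a completely different digest, which is why the Accept header matters when fetching it.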

Important Notes:

  • Registry API: Uses HTTPS on port 5000 (secure)
  • Registry UI: Uses HTTPS on port 8080 (secure, via nginx reverse proxy)
  • Access URLs:
    • Registry UI: https://YOUR_CI_CD_IP:8080 (use HTTPS)
    • Registry API: https://YOUR_CI_CD_IP:5000
  • Browser Access: Both services now use HTTPS for secure communication

Expected Output:

  • docker compose ps should show both registry and registry-ui as "Up"
  • curl -k https://localhost:8080/v2/_catalog should return {"repositories":[]} (empty initially)
  • curl -I https://localhost:8080 should return HTTP 200
  • Push/pull test should complete successfully

If something goes wrong:

  • Check container logs: docker compose logs
  • Verify ports are open: netstat -tlnp | grep :5000
  • Check Docker daemon config: cat /etc/docker/daemon.json
  • Restart registry: docker compose restart

Step 5: Configure Docker for Registry Access

5.1 Configure Docker for Registry Access

# The htpasswd file stores only a bcrypt hash, so the push password cannot be
# recovered from it. Generate a fresh credential and re-create the htpasswd entry:
PUSH_USER="push-user"
PUSH_PASSWORD=$(openssl rand -base64 32)
htpasswd -Bbn "$PUSH_USER" "$PUSH_PASSWORD" | sudo tee /etc/registry/auth/auth.htpasswd > /dev/null

# Copy the certificate to Docker's trusted certificates
sudo cp /etc/ssl/registry/registry.crt /usr/local/share/ca-certificates/registry.crt
sudo update-ca-certificates

# Allow Docker to talk to the self-signed registry
sudo tee /etc/docker/daemon.json << 'EOF'
{
  "insecure-registries": ["YOUR_CI_CD_IP:8080"]
}
EOF

# Store the push credentials (docker login writes them to ~/.docker/config.json;
# /etc/docker/daemon.json does not accept an "auths" key). If this fails before
# the daemon restart in 5.2, run it again afterwards.
echo "$PUSH_PASSWORD" | docker login YOUR_CI_CD_IP:8080 --username "$PUSH_USER" --password-stdin

Important: Replace YOUR_CI_CD_IP with your actual CI/CD Linode IP address.
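For reference, the credential docker login stores is simply base64 of "user:password". A round-trip with stand-in values (example-password is obviously not a real credential):

```shell
# Docker stores registry credentials as base64("user:password")
PUSH_USER="push-user"
PUSH_PASSWORD="example-password"   # stand-in value for illustration only
AUTH=$(printf '%s:%s' "$PUSH_USER" "$PUSH_PASSWORD" | base64)
printf '%s' "$AUTH" | base64 -d   # → push-user:example-password
```

This is encoding, not encryption, which is why the file holding it (~/.docker/config.json) should stay private to the deploying user.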

5.2 Restart Docker

sudo systemctl restart docker

Public Registry Access Model

Your registry is now configured with the following access model:

Public Read Access

Anyone can pull images without authentication:

# From any machine (public access)
docker pull YOUR_CI_CD_IP:5000/APP_NAME/backend:latest
docker pull YOUR_CI_CD_IP:5000/APP_NAME/frontend:latest

Authenticated Write Access

Only the CI/CD Linode can push images (using credentials):

# From CI/CD Linode only (authenticated)
docker push YOUR_CI_CD_IP:5000/APP_NAME/backend:latest
docker push YOUR_CI_CD_IP:5000/APP_NAME/frontend:latest

Registry UI Access

Public web interface for browsing images:

https://YOUR_CI_CD_IP:8080

Client Configuration

For other machines to pull images, they only need:

# Add to /etc/docker/daemon.json on client machines
{
  "insecure-registries": ["YOUR_CI_CD_IP:5000"]
}
# No authentication needed for pulls

Step 6: Set Up SSH for Production Communication

6.1 Generate SSH Key Pair

ssh-keygen -t ed25519 -C "ci-cd-server" -f ~/.ssh/id_ed25519 -N ""

6.2 Create SSH Config

cat > ~/.ssh/config << 'EOF'
Host production
    HostName YOUR_PRODUCTION_IP
    User DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF

chmod 600 ~/.ssh/config

Step 7: Install Forgejo Actions Runner

7.1 Download Runner

cd ~
wget https://code.forgejo.org/forgejo/runner/releases/download/v0.2.11/forgejo-runner-0.2.11-linux-amd64
chmod +x forgejo-runner-0.2.11-linux-amd64
sudo mv forgejo-runner-0.2.11-linux-amd64 /usr/local/bin/forgejo-runner
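Before the service below will pick up jobs, the runner must be registered with your Forgejo instance. A sketch of the registration invocation; the instance URL, token, and label are placeholders, and the exact flags may differ by runner version (check forgejo-runner register --help):

```shell
# Sketch only: assemble the command, print it, then run it manually
# after substituting real values (both placeholders below are assumptions).
INSTANCE_URL="https://your-forgejo-instance"
RUNNER_TOKEN="YOUR_REGISTRATION_TOKEN"   # from Repository → Settings → Actions → Runners
set -- forgejo-runner register --no-interactive \
  --instance "$INSTANCE_URL" --token "$RUNNER_TOKEN" \
  --name ci-cd-runner --labels docker
echo "Registration command: $*"
```

The --name here matches the "ci-cd-runner" name the verification step below expects to see in the Forgejo web interface.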

7.2 Create Systemd Service

sudo tee /etc/systemd/system/forgejo-runner.service > /dev/null << 'EOF'
[Unit]
Description=Forgejo Actions Runner
After=network.target

[Service]
Type=simple
User=SERVICE_USER
WorkingDirectory=/home/SERVICE_USER
ExecStart=/usr/local/bin/forgejo-runner daemon
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

7.3 Enable Service

sudo systemctl daemon-reload
sudo systemctl enable forgejo-runner.service

7.4 Test Runner Configuration

# Check if the runner is running
sudo systemctl status forgejo-runner.service

# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager

# Test runner connectivity (in a separate terminal)
forgejo-runner list

# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "ci-cd-runner" with status "Online"

Expected Output:

  • systemctl status should show "active (running)"
  • forgejo-runner list should show your runner
  • Forgejo web interface should show the runner as online

If something goes wrong:

  • Check logs: sudo journalctl -u forgejo-runner.service -f
  • Verify token: Make sure the registration token is correct
  • Check network: Ensure the runner can reach your Forgejo instance
  • Restart service: sudo systemctl restart forgejo-runner.service

Step 8: Set Up Monitoring and Cleanup

8.1 Monitoring Script

Important: The repository includes a pre-configured monitoring script in the scripts/ directory that can be used for both CI/CD and production monitoring.

Repository Script:

  • scripts/monitor.sh - Comprehensive monitoring script with support for both CI/CD and production environments

To use the repository monitoring script:

# Clone the repository if not already done
git clone https://your-forgejo-instance/your-username/APP_NAME.git /tmp/monitoring-setup
cd /tmp/monitoring-setup

# Make the script executable
chmod +x scripts/monitor.sh

# Test CI/CD monitoring
./scripts/monitor.sh --type ci-cd

# Test production monitoring (if you have a production setup)
./scripts/monitor.sh --type production

# Clean up
cd /
rm -rf /tmp/monitoring-setup

Alternative: Create a local copy for convenience:

# Copy the script to your home directory for easy access
cp /tmp/monitoring-setup/scripts/monitor.sh ~/monitor.sh
chmod +x ~/monitor.sh

# Test the local copy
~/monitor.sh --type ci-cd

Note: The repository script is more comprehensive and includes proper error handling, colored output, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate monitoring information.

8.2 Cleanup Script

Important: The repository includes a pre-configured cleanup script in the scripts/ directory that can be used for both CI/CD and production cleanup operations.

Repository Script:

  • scripts/cleanup.sh - Comprehensive cleanup script with support for both CI/CD and production environments

To use the repository cleanup script:

# Clone the repository if not already done
git clone https://your-forgejo-instance/your-username/APP_NAME.git /tmp/cleanup-setup
cd /tmp/cleanup-setup

# Make the script executable
chmod +x scripts/cleanup.sh

# Test CI/CD cleanup (dry run first)
./scripts/cleanup.sh --type ci-cd --dry-run

# Run CI/CD cleanup
./scripts/cleanup.sh --type ci-cd

# Test production cleanup (dry run first)
./scripts/cleanup.sh --type production --dry-run

# Clean up
cd /
rm -rf /tmp/cleanup-setup

Alternative: Create a local copy for convenience:

# Copy the script to your home directory for easy access
cp /tmp/cleanup-setup/scripts/cleanup.sh ~/cleanup.sh
chmod +x ~/cleanup.sh

# Test the local copy (dry run)
~/cleanup.sh --type ci-cd --dry-run

Note: The repository script is more comprehensive and includes proper error handling, colored output, dry-run mode, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate cleanup operations.

8.3 Test Cleanup Script

# Create some test images to clean up
docker pull alpine:latest
docker pull nginx:latest
docker tag alpine:latest test-cleanup:latest
docker tag nginx:latest test-cleanup2:latest

# Test cleanup with dry run first (run from the repository clone at /opt/APP_NAME)
cd /opt/APP_NAME
./scripts/cleanup.sh --type ci-cd --dry-run

# Run the cleanup script
./scripts/cleanup.sh --type ci-cd

# Verify cleanup worked
echo "Checking remaining images:"
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"

echo "Checking remaining volumes:"
docker volume ls

echo "Checking remaining networks:"
docker network ls

Expected Output:

  • Cleanup script should run without errors
  • Test images should be removed
  • System should report cleanup completion
  • Remaining images should be minimal (only actively used ones)

If something goes wrong:

  • Check script permissions: ls -la scripts/cleanup.sh
  • Verify Docker access: docker ps
  • Check registry access: cd /opt/APP_NAME/registry && docker compose ps
  • Run manually: bash -x scripts/cleanup.sh

8.4 Set Up Automated Cleanup

# Create a cron job to run cleanup daily at 3 AM using the repository clone
# at /opt/APP_NAME (the /tmp checkout above is deleted after testing, so
# don't point cron at it)
(crontab -l 2>/dev/null; echo "0 3 * * * cd /opt/APP_NAME && ./scripts/cleanup.sh --type ci-cd >> /tmp/cleanup.log 2>&1") | crontab -

# Verify the cron job was added
crontab -l

What this does:

  • Runs automatically: The cleanup script runs every day at 3:00 AM
  • Frequency: Daily cleanup to prevent disk space issues
  • Logging: All cleanup output is logged to /tmp/cleanup.log
  • What it cleans: Unused Docker images, volumes, networks, and registry images

Alternative: Use a local copy for automated cleanup:

# If you created a local copy, use that instead
(crontab -l 2>/dev/null; echo "0 3 * * * ~/cleanup.sh --type ci-cd >> ~/cleanup.log 2>&1") | crontab -

Step 9: Configure Firewall

sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 5000/tcp  # Docker registry (public read access)
sudo ufw allow 8080/tcp  # Registry UI (public read access)

Security Model:

  • Port 5000 (Registry): Public read access, authenticated write access
  • Port 8080 (UI): Public read access for browsing images
  • SSH: Open on port 22 with key-based authentication; to restrict it to your own IP, use sudo ufw allow from YOUR_IP_ADDRESS to any port 22 proto tcp instead of sudo ufw allow ssh
  • All other ports: Blocked

Step 10: Test CI/CD Setup

10.1 Test Docker Installation

docker --version
docker compose --version

10.2 Check Registry Status

cd /opt/APP_NAME/registry
docker compose ps

10.3 Test Registry Access

curl -k https://localhost:5000/v2/_catalog

10.4 Get Public Key for Production Server

cat ~/.ssh/id_ed25519.pub