CI/CD Pipeline Setup Guide

This guide covers setting up a complete Continuous Integration/Continuous Deployment (CI/CD) pipeline with a CI/CD Linode and Production Linode for automated builds, testing, and deployments using Docker-in-Docker (DinD) for isolated CI operations.

Architecture Overview

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Forgejo Host  │    │   CI/CD Linode  │    │ Production Linode│
│   (Repository)  │    │ (Actions Runner)│    │ (Docker Deploy) │
│                 │    │ + Harbor Registry│   │                 │
│                 │    │ + DinD Container│    │                 │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │                       │                       │
         │                       │                       │
         └─────────── Push ──────┼───────────────────────┘
                                 │
                                 └─── Deploy ────────────┘

Pipeline Flow

  1. Code Push: Developer pushes code to Forgejo repository
  2. Automated Testing: CI/CD Linode runs tests in isolated DinD environment
  3. Image Building: If tests pass, Docker images are built within DinD
  4. Registry Push: Images are pushed to Harbor registry from DinD
  5. Production Deployment: Production Linode pulls images and deploys
  6. Health Check: Application is verified and accessible

Key Benefits of DinD Approach

For Rust Testing:

  • Fresh environment every test run
  • Parallel execution capability
  • Isolated dependencies - no test pollution
  • Fast cleanup - just restart DinD container

For CI/CD Operations:

  • Zero resource contention with Harbor
  • Simple cleanup - one-line container restart
  • Perfect isolation - CI/CD can't affect Harbor
  • Consistent environment - same setup every time

For Maintenance:

  • Reduced complexity - no complex cleanup scripts
  • Easy debugging - isolated environment
  • Reliable operations - no interference between services

Prerequisites

  • Two Ubuntu 24.04 LTS Linodes with root access
  • Basic familiarity with Linux commands and SSH
  • Forgejo repository with Actions enabled
  • Optional: Domain name for Production Linode (for SSL/TLS)

Quick Start

  1. Set up CI/CD Linode (Steps 1-9)
  2. Set up Production Linode (Steps 10-17)
  3. Configure SSH key exchange (Step 14)
  4. Set up Forgejo repository secrets (Step 19)
  5. Test the complete pipeline (Step 20)

What's Included

CI/CD Linode Features

  • Forgejo Actions runner for automated builds
  • Docker-in-Docker (DinD) container for isolated CI operations
  • Harbor container registry for image storage
  • Harbor web UI for image management
  • Built-in vulnerability scanning with Trivy
  • Role-based access control and audit logs
  • Secure SSH communication with production
  • Simplified cleanup - just restart DinD container

Production Linode Features

  • Docker-based application deployment
  • Optional SSL/TLS certificate management (if domain is provided)
  • Nginx reverse proxy with security headers
  • Automated backups and monitoring
  • Firewall and fail2ban protection

Pipeline Features

  • Automated testing on every code push in isolated environment
  • Automated image building and registry push from DinD
  • Automated deployment to production
  • Rollback capability with image versioning
  • Health monitoring and logging
  • Zero resource contention between CI/CD and Harbor

Security Model and User Separation

This setup uses a principle of least privilege approach with separate users for different purposes:

User Roles

  1. Root User

    • Purpose: Initial system setup only
    • SSH Access: Disabled after setup
    • Privileges: Full system access (used only during initial configuration)
  2. Deployment User (CI_DEPLOY_USER on CI Linode, PROD_DEPLOY_USER on Production Linode)

    • Purpose: SSH access, deployment tasks, system administration
    • SSH Access: Enabled with key-based authentication
    • Privileges: Sudo access for deployment and administrative tasks
    • Example: ci-deploy / prod-deploy
  3. Service Account (CI_SERVICE_USER on CI Linode, PROD_SERVICE_USER on Production Linode)

    • Purpose: Running application services (Docker containers, databases)
    • SSH Access: None (no login shell)
    • Privileges: No sudo access, minimal system access
    • Example: ci-service / prod-service

Security Benefits

  • No root SSH access: Eliminates the most common attack vector
  • Principle of least privilege: Each user has only the access they need
  • Separation of concerns: Deployment tasks vs. service execution are separate
  • Audit trail: Clear distinction between deployment and service activities
  • Reduced attack surface: Service account has minimal privileges

File Permissions

  • Application files: Owned by CI_SERVICE_USER (CI Linode) / PROD_SERVICE_USER (Production Linode) for security
  • Docker operations: Run by CI_SERVICE_USER (CI Linode) / PROD_SERVICE_USER (Production Linode), both with Docker group access
  • Service execution: Run by CI_SERVICE_USER / PROD_SERVICE_USER (no sudo needed)

Prerequisites and Initial Setup

What's Already Done (Assumptions)

This guide assumes you have already:

  1. Created two Ubuntu 24.04 LTS Linodes with root access
  2. Set root passwords for both Linodes
  3. Have SSH client installed on your local machine
  4. Have Forgejo repository with Actions enabled
  5. Optional: Domain name pointing to Production Linode's IP addresses

Step 0: Initial SSH Access and Verification

Before proceeding with the setup, you need to establish initial SSH access to both Linodes.

0.1 Get Your Linode IP Addresses

From your Linode dashboard, note the IP addresses for:

  • CI/CD Linode: YOUR_CI_CD_IP (IP address only, no domain needed)
  • Production Linode: YOUR_PRODUCTION_IP (IP address for SSH, domain for web access)

0.2 Test Initial SSH Access

Test SSH access to both Linodes:

# Test CI/CD Linode (IP address only)
ssh root@YOUR_CI_CD_IP

# Test Production Linode (IP address only)
ssh root@YOUR_PRODUCTION_IP

Expected output: SSH login prompt asking for root password.

If something goes wrong:

  • Verify the IP addresses are correct
  • Check that SSH is enabled on the Linodes
  • Ensure your local machine can reach the Linodes (no firewall blocking)
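
If these checks do not reveal the problem, a quick reachability test from your local machine can narrow it down (this assumes ping and nc/netcat are available locally):

# Check that each Linode answers on the network
ping -c 3 YOUR_CI_CD_IP
ping -c 3 YOUR_PRODUCTION_IP

# Check that SSH (port 22) is reachable
nc -zv YOUR_CI_CD_IP 22
nc -zv YOUR_PRODUCTION_IP 22

# Verbose SSH output often shows where the connection fails
ssh -v root@YOUR_CI_CD_IP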

0.3 Choose Your Names

Before proceeding, decide on:

  1. CI Service Account Name: Choose a username for the CI service account (e.g., ci-service)

    • Replace CI_SERVICE_USER in this guide with your chosen name
    • This account runs the CI pipeline and Docker operations on the CI Linode
  2. CI Deployment User Name: Choose a username for CI deployment tasks (e.g., ci-deploy)

    • Replace CI_DEPLOY_USER in this guide with your chosen name
    • This account has sudo privileges for deployment tasks
  3. Application Name: Choose a name for your application (e.g., sharenet)

    • Replace APP_NAME in this guide with your chosen name
  4. Domain Name (Optional): If you have a domain, note it for SSL configuration

    • Replace your-domain.com in this guide with your actual domain

Example:

  • If you choose ci-service as CI service account, ci-deploy as CI deployment user, and sharenet as application name:
    • Replace all CI_SERVICE_USER with ci-service
    • Replace all CI_DEPLOY_USER with ci-deploy
    • Replace all APP_NAME with sharenet
    • If you have a domain example.com, replace your-domain.com with example.com

Security Model:

  • CI Service Account (CI_SERVICE_USER): Runs CI pipeline and Docker operations, no sudo access
  • CI Deployment User (CI_DEPLOY_USER): Handles SSH communication and orchestration, has sudo access
  • Root: Only used for initial setup, then disabled for SSH access

0.4 Set Up SSH Key Authentication for Local Development

Important: This step should be done on both Linodes to enable secure SSH access from your local development machine.

0.4.1 Generate SSH Key on Your Local Machine

On your local development machine, generate an SSH key pair:

# Generate SSH key pair (if you don't already have one)
ssh-keygen -t ed25519 -C "your-email@example.com" -f ~/.ssh/id_ed25519 -N ""

# Or use existing key if you have one
ls ~/.ssh/id_ed25519.pub
0.4.2 Add Your Public Key to Both Linodes

Copy your public key to both Linodes:

# Copy your public key to CI/CD Linode
ssh-copy-id root@YOUR_CI_CD_IP

# Copy your public key to Production Linode
ssh-copy-id root@YOUR_PRODUCTION_IP

Alternative method (if ssh-copy-id doesn't work):

# Copy your public key content
cat ~/.ssh/id_ed25519.pub

# Then manually add to each server
ssh root@YOUR_CI_CD_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys

ssh root@YOUR_PRODUCTION_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys
0.4.3 Test SSH Key Authentication

Test that you can access both servers without passwords:

# Test CI/CD Linode
ssh root@YOUR_CI_CD_IP 'echo "SSH key authentication works for CI/CD"'

# Test Production Linode
ssh root@YOUR_PRODUCTION_IP 'echo "SSH key authentication works for Production"'

Expected output: The echo messages should appear without password prompts.

0.4.4 Create Deployment Users

On both Linodes, create the deployment user with sudo privileges:

For CI Linode:

# Create CI deployment user
sudo useradd -m -s /bin/bash CI_DEPLOY_USER
sudo usermod -aG sudo CI_DEPLOY_USER

# Set a secure password (for emergency access only)
echo "CI_DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd

# Copy your SSH key to the CI deployment user
sudo mkdir -p /home/CI_DEPLOY_USER/.ssh
sudo cp ~/.ssh/authorized_keys /home/CI_DEPLOY_USER/.ssh/
sudo chown -R CI_DEPLOY_USER:CI_DEPLOY_USER /home/CI_DEPLOY_USER/.ssh
sudo chmod 700 /home/CI_DEPLOY_USER/.ssh
sudo chmod 600 /home/CI_DEPLOY_USER/.ssh/authorized_keys

# Configure sudo to use SSH key authentication (most secure)
echo "CI_DEPLOY_USER ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/CI_DEPLOY_USER
sudo chmod 440 /etc/sudoers.d/CI_DEPLOY_USER

For Production Linode:

# Create production deployment user
sudo useradd -m -s /bin/bash PROD_DEPLOY_USER
sudo usermod -aG sudo PROD_DEPLOY_USER

# Set a secure password (for emergency access only)
echo "PROD_DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd

# Copy your SSH key to the production deployment user
sudo mkdir -p /home/PROD_DEPLOY_USER/.ssh
sudo cp ~/.ssh/authorized_keys /home/PROD_DEPLOY_USER/.ssh/
sudo chown -R PROD_DEPLOY_USER:PROD_DEPLOY_USER /home/PROD_DEPLOY_USER/.ssh
sudo chmod 700 /home/PROD_DEPLOY_USER/.ssh
sudo chmod 600 /home/PROD_DEPLOY_USER/.ssh/authorized_keys

# Configure sudo to use SSH key authentication (most secure)
echo "PROD_DEPLOY_USER ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/PROD_DEPLOY_USER
sudo chmod 440 /etc/sudoers.d/PROD_DEPLOY_USER

Security Note: This configuration allows the deployment users to use sudo without a password, which is more secure for CI/CD automation since there are no passwords to store or expose. The random password is set for emergency console access only.
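
Before relying on these sudoers entries, it is worth validating their syntax, since a malformed file under /etc/sudoers.d/ can break sudo entirely. A minimal check:

# Validate the sudoers drop-in on the CI/CD Linode
sudo visudo -cf /etc/sudoers.d/CI_DEPLOY_USER

# Validate the sudoers drop-in on the Production Linode
sudo visudo -cf /etc/sudoers.d/PROD_DEPLOY_USER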

0.4.5 Test Sudo Access

Test that the deployment users can use sudo without password prompts:

# Test CI deployment user sudo access
ssh CI_DEPLOY_USER@YOUR_CI_CD_IP 'sudo whoami'

# Test production deployment user sudo access
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'sudo whoami'

Expected output: Both commands should return root without prompting for a password.

0.4.6 Test Deployment User Access

Test that you can access both servers as the deployment users:

# Test CI/CD Linode
ssh CI_DEPLOY_USER@YOUR_CI_CD_IP 'echo "CI deployment user SSH access works for CI/CD"'

# Test Production Linode
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "Production deployment user SSH access works for Production"'

Expected output: The echo messages should appear without password prompts.

0.4.7 Create SSH Config for Easy Access

On your local machine, create an SSH config file for easy access:

# Create SSH config
cat > ~/.ssh/config << 'EOF'
Host ci-cd-dev
    HostName YOUR_CI_CD_IP
    User CI_DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no

Host production-dev
    HostName YOUR_PRODUCTION_IP
    User PROD_DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no
EOF

chmod 600 ~/.ssh/config

Now you can access servers easily:

ssh ci-cd-dev
ssh production-dev

Part 1: CI/CD Linode Setup

Step 1: Initial System Setup

1.1 Update the System

sudo apt update && sudo apt upgrade -y

What this does: Updates package lists and upgrades all installed packages to their latest versions.

Expected output: A list of packages being updated, followed by completion messages.

1.2 Configure Timezone

# Configure timezone interactively
sudo dpkg-reconfigure tzdata

# Verify timezone setting
date

What this does: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).

Expected output: After selecting your timezone, the date command should show the current date and time in your selected timezone.

1.3 Configure /etc/hosts

# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts

# Verify the configuration
cat /etc/hosts

What this does:

  • Adds localhost entries for both IPv4 and IPv6 addresses to /etc/hosts
  • Ensures proper localhost resolution for both IPv4 and IPv6

Important: Replace YOUR_CI_CD_IPV4_ADDRESS and YOUR_CI_CD_IPV6_ADDRESS with the actual IPv4 and IPv6 addresses of your CI/CD Linode obtained from your Linode dashboard.

Expected output: The /etc/hosts file should show entries for 127.0.0.1, ::1, and your Linode's actual IP addresses all mapping to localhost.

1.4 Install Essential Packages

sudo apt install -y \
    curl \
    wget \
    git \
    build-essential \
    pkg-config \
    libssl-dev \
    ca-certificates \
    apt-transport-https \
    software-properties-common \
    apache2-utils

What this does: Installs development tools, SSL libraries, and utilities needed for Docker and application building.

Step 2: Create Users

2.1 Create CI Service Account

# Create dedicated group for the CI service account
sudo groupadd -r CI_SERVICE_USER

# Create CI service account user with dedicated group
sudo useradd -r -g CI_SERVICE_USER -s /bin/bash -m -d /home/CI_SERVICE_USER CI_SERVICE_USER
echo "CI_SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd

2.2 Verify Users

sudo su - CI_SERVICE_USER
whoami
pwd
exit

sudo su - CI_DEPLOY_USER
whoami
pwd
exit

Step 3: Clone Repository for Registry Configuration

3.1 Clone Repository

# Switch to CI_DEPLOY_USER (who has sudo access)
sudo su - CI_DEPLOY_USER

# Create application directory and clone repository
sudo mkdir -p /opt/APP_NAME
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/APP_NAME
cd /opt
sudo git clone https://your-forgejo-instance/your-username/APP_NAME.git
sudo chown -R CI_SERVICE_USER:CI_SERVICE_USER APP_NAME/

# Verify the registry folder exists
ls -la /opt/APP_NAME/registry/

Important: Replace your-forgejo-instance, your-username, and APP_NAME with your actual Forgejo instance URL, username, and application name.

What this does:

  • CI_DEPLOY_USER creates the directory structure and clones the repository
  • CI_SERVICE_USER owns all the files for security
  • Registry configuration files are now available at /opt/APP_NAME/registry/

Step 4: Install Docker

4.1 Add Docker Repository

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update

4.2 Install Docker Packages

sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

4.3 Configure Docker for CI Service Account

sudo usermod -aG docker CI_SERVICE_USER
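
Group membership is only picked up on a new login session, so confirm the change took effect with a quick test as the service account (assumes outbound internet access to pull the hello-world image):

# Confirm group membership
id CI_SERVICE_USER

# Start a fresh login shell for CI_SERVICE_USER and test Docker access
sudo su - CI_SERVICE_USER -c 'docker run --rm hello-world'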

Step 5: Set Up Harbor Container Registry

Harbor provides a secure, enterprise-grade container registry with vulnerability scanning, role-based access control, and audit logging.

5.1 Create Harbor Service User

# Create dedicated user and group for Harbor
sudo groupadd -r harbor
sudo useradd -r -g harbor -s /bin/bash -m -d /opt/harbor harbor

# Set secure password for emergency access
echo "harbor:$(openssl rand -base64 32)" | sudo chpasswd

# Add harbor user to docker group
sudo usermod -aG docker harbor

# Add CI_DEPLOY_USER to harbor group for monitoring access
sudo usermod -aG harbor CI_DEPLOY_USER

# Set proper permissions on /opt/harbor directory
sudo chown harbor:harbor /opt/harbor
sudo chmod 755 /opt/harbor

5.2 Generate SSL Certificates

# Create system SSL directory for Harbor certificates
sudo mkdir -p /etc/ssl/registry

# Get your actual IP address
YOUR_ACTUAL_IP=$(curl -4 -s ifconfig.me)
echo "Your IP address is: $YOUR_ACTUAL_IP"

# Create OpenSSL configuration file with proper SANs
sudo tee /etc/ssl/registry/openssl.conf << EOF
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no

[req_distinguished_name]
C = US
ST = State
L = City
O = Organization
CN = $YOUR_ACTUAL_IP

[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
IP.1 = $YOUR_ACTUAL_IP
DNS.1 = $YOUR_ACTUAL_IP
DNS.2 = localhost
EOF

# Generate self-signed certificate with proper SANs
sudo openssl req -x509 -newkey rsa:4096 -keyout /etc/ssl/registry/registry.key -out /etc/ssl/registry/registry.crt -days 365 -nodes -extensions v3_req -config /etc/ssl/registry/openssl.conf

# Set proper permissions for harbor user
sudo chown harbor:harbor /etc/ssl/registry/registry.key
sudo chown harbor:harbor /etc/ssl/registry/registry.crt
sudo chmod 600 /etc/ssl/registry/registry.key
sudo chmod 644 /etc/ssl/registry/registry.crt
sudo chmod 644 /etc/ssl/registry/openssl.conf
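
Optionally, verify the certificate before wiring it into Harbor. The checks below confirm the SANs, the validity window, and that the key matches the certificate (the two hashes should be identical):

# Show the SANs and validity window of the generated certificate
sudo openssl x509 -in /etc/ssl/registry/registry.crt -noout -text | grep -A1 "Subject Alternative Name"
sudo openssl x509 -in /etc/ssl/registry/registry.crt -noout -dates

# Compare the public key hashes of the certificate and the private key
sudo openssl x509 -in /etc/ssl/registry/registry.crt -noout -pubkey | sha256sum
sudo openssl pkey -in /etc/ssl/registry/registry.key -pubout | sha256sum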

5.3 Configure Docker to Trust Harbor Registry

# Add the certificate to system CA certificates
sudo cp /etc/ssl/registry/registry.crt /usr/local/share/ca-certificates/registry.crt
sudo update-ca-certificates

# Restart Docker to ensure it picks up the new CA certificates
sudo systemctl restart docker

5.4 Install Harbor

# Switch to harbor user
sudo su - harbor

# Set DB_PASSWORD environment variable
export DB_PASSWORD=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-25)

# IMPORTANT: Save the DB_PASSWORD in your password manager for safekeeping
echo "DB_PASSWORD: $DB_PASSWORD"

# Exit the harbor shell (the harbor user has no sudo access) to return to CI_DEPLOY_USER
exit

# Download and install Harbor as CI_DEPLOY_USER
cd /opt/harbor

sudo wget https://github.com/goharbor/harbor/releases/download/v2.10.0/harbor-offline-installer-v2.10.0.tgz
sudo tar -xzf harbor-offline-installer-v2.10.0.tgz
cd harbor
sudo cp harbor.yml.tmpl harbor.yml

# Edit harbor.yml configuration
sudo nano harbor.yml

Important: In the harbor.yml file, update:

  • hostname: YOUR_CI_CD_IP (replace with your actual IP)
  • certificate: /etc/ssl/registry/registry.crt
  • private_key: /etc/ssl/registry/registry.key
  • password: <the DB_PASSWORD generated above>

Note: The default Harbor admin password is "Harbor12345" and will be changed in Step 5.6
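
If you prefer to script the edit instead of using nano, the sed sketch below makes the same changes non-interactively. It assumes the stock harbor.yml.tmpl layout (top-level hostname, certificate and private_key under https, and the database password field), so review the file afterwards to confirm each substitution landed where you expect. Replace YOUR_DB_PASSWORD with the DB_PASSWORD value you saved above:

# Non-interactive edits to harbor.yml (run from /opt/harbor/harbor)
sudo sed -i "s|^hostname:.*|hostname: YOUR_CI_CD_IP|" harbor.yml
sudo sed -i "s|^  certificate:.*|  certificate: /etc/ssl/registry/registry.crt|" harbor.yml
sudo sed -i "s|^  private_key:.*|  private_key: /etc/ssl/registry/registry.key|" harbor.yml
sudo sed -i "s|^  password:.*|  password: YOUR_DB_PASSWORD|" harbor.yml

# Review the result before running the installer
sudo grep -nE "^hostname:|certificate:|private_key:|password:" harbor.yml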

# Run the following as the CI_DEPLOY_USER
sudo su - CI_DEPLOY_USER

cd /opt/harbor/harbor

# Install Harbor with Trivy vulnerability scanner
sudo ./prepare
sudo ./install.sh --with-trivy
sudo docker compose down
sudo chown -R harbor:harbor /opt/harbor/harbor

# Switch to the harbor user
sudo su - harbor

cd /opt/harbor/harbor

# Run the following to partially adjust the permissions for the harbor user
./install.sh --with-trivy

# Exit harbor user shell to switch back to the CI_DEPLOY_USER
exit

cd /opt/harbor/harbor

# Run the following to fix ownership of the generated env files
sudo chown harbor:harbor common/config/jobservice/env
sudo chown harbor:harbor common/config/db/env
sudo chown harbor:harbor common/config/registryctl/env
sudo chown harbor:harbor common/config/trivy-adapter/env
sudo chown harbor:harbor common/config/core/env

# Switch back to harbor user and bring Harbor back up
sudo su - harbor
cd /opt/harbor/harbor
docker compose up -d

# Verify that all Harbor containers are healthy
docker compose ps -a

# Verify using the Harbor API that all Harbor processes are healthy
curl -k https://localhost/api/v2.0/health

5.5 Create Systemd Service

# Create systemd service file for Harbor
sudo tee /etc/systemd/system/harbor.service << EOF
[Unit]
Description=Harbor Container Registry
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
User=harbor
Group=harbor
WorkingDirectory=/opt/harbor/harbor
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
ExecReload=/usr/bin/docker compose down && /usr/bin/docker compose up -d

[Install]
WantedBy=multi-user.target
EOF

# Enable and start Harbor service
sudo systemctl daemon-reload
sudo systemctl enable harbor.service
sudo systemctl start harbor.service

# Monitor startup (can take 2-3 minutes)
sudo journalctl -u harbor.service -f

5.6 Configure Harbor Access

  1. Access Harbor Web UI: Open https://YOUR_CI_CD_IP in your browser
  2. Login: Username admin, Password Harbor12345
  3. Change admin password: Click admin icon → Change Password
  4. Create project: Projects → New Project → Name: APP_NAME, Access Level: Public
  5. Create CI user: Administration → Users → New User → Username: ci-user, Password: your-secure-password
  6. Assign role: Projects → APP_NAME → Members → + User → Select ci-user, Role: Developer
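
These steps can also be scripted against Harbor's REST API if you prefer the command line. The sketch below only covers project creation and assumes the v2.0 API paths plus the admin password you just set (YOUR_ADMIN_PASSWORD is a placeholder); creating the ci-user and assigning its role are quickest to finish in the web UI:

# Create a public APP_NAME project via the Harbor API (run on the CI/CD Linode)
curl -k -u "admin:YOUR_ADMIN_PASSWORD" \
  -X POST "https://YOUR_CI_CD_IP/api/v2.0/projects" \
  -H "Content-Type: application/json" \
  -d '{"project_name": "APP_NAME", "metadata": {"public": "true"}}'

# Confirm the project exists
curl -k -u "admin:YOUR_ADMIN_PASSWORD" "https://YOUR_CI_CD_IP/api/v2.0/projects?name=APP_NAME"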

5.7 Test Harbor Setup

# Switch to CI_SERVICE_USER for testing (CI_SERVICE_USER runs CI pipeline and Docker operations)
sudo su - CI_SERVICE_USER

# Test Docker login and push
echo "your-secure-password" | docker login YOUR_CI_CD_IP -u ci-user --password-stdin

# Create and push test image
echo "FROM alpine:latest" > /tmp/test.Dockerfile
docker build -f /tmp/test.Dockerfile -t YOUR_CI_CD_IP/APP_NAME/test:latest /tmp
docker push YOUR_CI_CD_IP/APP_NAME/test:latest

# Test public pull (no authentication)
docker logout YOUR_CI_CD_IP
docker pull YOUR_CI_CD_IP/APP_NAME/test:latest

# Test that unauthorized push is blocked
echo "FROM alpine:latest" > /tmp/unauthorized.Dockerfile
docker build -f /tmp/unauthorized.Dockerfile -t YOUR_CI_CD_IP/APP_NAME/unauthorized:latest /tmp
docker push YOUR_CI_CD_IP/APP_NAME/unauthorized:latest
# Expected: This should fail with authentication error

# Clean up
docker rmi YOUR_CI_CD_IP/APP_NAME/test:latest
docker rmi YOUR_CI_CD_IP/APP_NAME/unauthorized:latest
exit

Expected behavior:

  • Push requires authentication
  • Pull works without authentication
  • Unauthorized push is blocked
  • Web UI accessible at https://YOUR_CI_CD_IP

Step 6: Install Forgejo Actions Runner

6.1 Download Runner

Important: Run this step as the CI_DEPLOY_USER (not root or CI_SERVICE_USER). The CI_DEPLOY_USER handles deployment tasks including downloading and installing the Forgejo runner.

cd ~

# Get the latest version dynamically
LATEST_VERSION=$(curl -s https://code.forgejo.org/api/v1/repos/forgejo/runner/releases | jq -r '.[0].tag_name')
echo "Downloading Forgejo runner version: $LATEST_VERSION"

# Download the latest runner
wget https://code.forgejo.org/forgejo/runner/releases/download/${LATEST_VERSION}/forgejo-runner-${LATEST_VERSION#v}-linux-amd64
chmod +x forgejo-runner-${LATEST_VERSION#v}-linux-amd64
sudo mv forgejo-runner-${LATEST_VERSION#v}-linux-amd64 /usr/bin/forgejo-runner

Alternative: Pin to Specific Version (Recommended for Production)

If you prefer to pin to a specific version for stability, replace the dynamic download with:

cd ~
VERSION="v6.3.1"  # Pin to specific version
wget https://code.forgejo.org/forgejo/runner/releases/download/${VERSION}/forgejo-runner-${VERSION#v}-linux-amd64
chmod +x forgejo-runner-${VERSION#v}-linux-amd64
sudo mv forgejo-runner-${VERSION#v}-linux-amd64 /usr/bin/forgejo-runner

What this does:

  • Dynamic approach: Downloads the latest stable Forgejo Actions runner
  • Version pinning: Allows you to specify a known-good version for production
  • System installation: Installs the binary system-wide in /usr/bin/ for proper Linux structure
  • Makes the binary executable and available system-wide

Production Recommendation: Use version pinning in production environments to ensure consistency and avoid unexpected breaking changes.
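
Either way, confirm the binary is installed and executable before continuing:

# Verify the runner binary is on the PATH
forgejo-runner --version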

6.2 Register Runner

Important: The runner must be registered with your Forgejo instance before it can start. This creates the required .runner configuration file.

Step 1: Get Permissions to Create Repository-level Runners

To create a repository-level runner, you need Repository Admin or Owner permissions. Here's how to check and manage permissions:

Check Your Current Permissions:

  1. Go to your repository: https://your-forgejo-instance/your-username/your-repo
  2. Look for the Settings tab in the repository navigation
  3. If you see Actions in the left sidebar under Settings, you have the right permissions
  4. If you don't see Settings or Actions, you don't have admin access

Add Repository Admin (Repository Owner Only):

If you're the repository owner and need to give someone else admin access:

  1. Go to Repository Settings:

    • Navigate to your repository
    • Click Settings tab
    • Click Collaborators in the left sidebar
  2. Add Collaborator:

    • Click Add Collaborator button
    • Enter the username or email of the person you want to add
    • Select Admin from the role dropdown
    • Click Add Collaborator
  3. Alternative: Manage Team Access (for Organizations):

    • Go to Settings → Collaborators
    • Click Manage Team Access
    • Add the team with Admin permissions

Repository Roles and Permissions:

Role     Can Create Runners    Can Manage Repository    Can Push Code
Owner    Yes                   Yes                      Yes
Admin    Yes                   Yes                      Yes
Write    No                    No                       Yes
Read     No                    No                       No

If You Don't Have Permissions:

Option 1: Ask Repository Owner

  • Contact the person who owns the repository
  • Ask them to create the runner and share the registration token with you

Option 2: Use Organization/User Runner

  • If you have access to organization settings, create an org-level runner
  • Or create a user-level runner if you own other repositories

Option 3: Site Admin Help

  • Contact your Forgejo instance administrator to create a site-level runner

Site Administrator: Setting Repository Admin (Forgejo Instance Admin)

To add an existing user as an Administrator of an existing repository in Forgejo, follow these steps:

  1. Go to the repository: Navigate to the main page of the repository you want to manage.
  2. Access repository settings: Click on the "Settings" tab under your repository name.
  3. Go to Collaborators & teams: In the sidebar, under the "Access" section, click on "Collaborators & teams".
  4. Manage access: Under "Manage access", locate the existing user you want to make an administrator.
  5. Change their role: Next to the user's name, select the "Role" dropdown menu and click on "Administrator".

Important Note: If the user is already the Owner of the repository, then they do not have to add themselves as an Administrator of the repository and indeed cannot. Repository owners automatically have all administrative permissions.

Important Notes for Site Administrators:

  • Repository Admin can manage the repository but cannot modify site-wide settings
  • Site Admin retains full control over the Forgejo instance
  • Changes take effect immediately for the user
  • Consider the security implications of granting admin access

Step 2: Get Registration Token

  1. Go to your Forgejo repository
  2. Navigate to Settings → Actions → Runners
  3. Click "New runner"
  4. Copy the registration token

Step 3: Register the Runner

# Switch to CI_DEPLOY_USER to register the runner
sudo su - CI_DEPLOY_USER

cd ~

# Register the runner with your Forgejo instance
forgejo-runner register \
  --instance https://your-forgejo-instance \
  --token YOUR_REGISTRATION_TOKEN \
  --name "ci-runner" \
  --labels "ci" \
  --no-interactive

Important: Replace your-forgejo-instance with your actual Forgejo instance URL (make sure it ends in a /) and YOUR_REGISTRATION_TOKEN with the token you copied in Step 2.

Note: your-forgejo-instance should be the base URL of your Forgejo instance (e.g., https://git.<your-domain>/), not the full path to the repository. The runner registration process connects to the specific repository based on the token you provide.

What this does:

  • Creates the required .runner configuration file in the CI_DEPLOY_USER's home directory
  • Registers the runner with your Forgejo instance
  • Sets up the runner with the "ci" label used by the CI workflow

Step 4: Set Up System Configuration

# Create system config directory for Forgejo runner
sudo mkdir -p /etc/forgejo-runner

# Copy the runner configuration to system location
sudo cp /home/CI_DEPLOY_USER/.runner /etc/forgejo-runner/.runner

# Set proper ownership and permissions
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /etc/forgejo-runner/.runner
sudo chmod 600 /etc/forgejo-runner/.runner

What this does:

  • Copies the runner configuration to the system location (/etc/forgejo-runner/.runner)
  • Sets proper ownership and permissions so CI_SERVICE_USER can read the config when the service runs

Step 5: Create and Enable Systemd Service

sudo tee /etc/systemd/system/forgejo-runner.service > /dev/null << 'EOF'
[Unit]
Description=Forgejo Actions Runner
After=network.target

[Service]
Type=simple
User=CI_SERVICE_USER
WorkingDirectory=/etc/forgejo-runner
ExecStart=/usr/bin/forgejo-runner daemon
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# Enable the service
sudo systemctl daemon-reload
sudo systemctl enable forgejo-runner.service

What this does:

  • Creates the systemd service configuration for the Forgejo runner
  • Sets the working directory to /etc/forgejo-runner where the .runner file is located
  • Enables the service to start automatically on boot
  • Sets up proper restart behavior for reliability

6.3 Start Service

# Start the Forgejo runner service
sudo systemctl start forgejo-runner.service

# Verify the service is running
sudo systemctl status forgejo-runner.service

Expected Output: The service should show "active (running)" status.

What this does:

  • Starts the Forgejo runner daemon as a system service
  • The runner will now be available to accept and execute workflows from your Forgejo instance
  • The service will automatically restart if it crashes or the system reboots

6.4 Test Runner Configuration

# Check if the runner is running
sudo systemctl status forgejo-runner.service

# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager

# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "ci-runner" with status "Online"

Expected Output:

  • systemctl status should show "active (running)"
  • Forgejo web interface should show the runner as online with "ci" label

If something goes wrong:

  • Check logs: sudo journalctl -u forgejo-runner.service -f
  • Verify token: Make sure the registration token is correct
  • Check network: Ensure the runner can reach your Forgejo instance
  • Restart service: sudo systemctl restart forgejo-runner.service

Step 7: Set Up Docker-in-Docker (DinD) for CI Operations

Important: This step sets up a Docker-in-Docker container that provides an isolated environment for CI/CD operations, eliminating resource contention with Harbor and simplifying cleanup.

7.1 Create Containerized CI/CD Environment

# Switch to CI_SERVICE_USER (who has Docker group access)
sudo su - CI_SERVICE_USER

# Navigate to the application directory
cd /opt/APP_NAME

# Start DinD container for isolated Docker operations
docker run -d \
  --name ci-dind \
  --privileged \
  -p 2375:2375 \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind

# Wait for a minute or two for DinD to be ready (wait for Docker daemon inside DinD)

# Test DinD connectivity
docker exec ci-dind docker version

What this does:

  • Creates isolated DinD environment: Provides isolated Docker environment for all CI/CD operations
  • Health checks: Ensures DinD is fully ready before proceeding
  • Simple setup: Direct Docker commands for maximum flexibility

Why CI_SERVICE_USER: The CI_SERVICE_USER is in the docker group and runs the CI pipeline, so it needs direct access to the DinD container for seamless CI/CD operations.
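
Rather than guessing at a fixed wait time, you can poll the Docker daemon inside DinD until it responds. A minimal sketch (roughly a two-minute timeout; adjust as needed):

# Wait until the Docker daemon inside ci-dind is ready
for i in $(seq 1 60); do
  if docker exec ci-dind docker info > /dev/null 2>&1; then
    echo "DinD is ready"
    break
  fi
  echo "Waiting for DinD... ($i)"
  sleep 2
done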

7.2 Configure DinD for Harbor Registry

# Navigate to the application directory
cd /opt/APP_NAME

# Copy Harbor certificate to DinD container
docker cp /etc/ssl/registry/registry.crt ci-dind:/usr/local/share/ca-certificates/

# Fix certificate ownership (crucial for CA certificate trust)
docker exec ci-dind chown root:root /usr/local/share/ca-certificates/registry.crt

# Update CA certificates
docker exec ci-dind update-ca-certificates

# Restart DinD container to pick up new CA certificates
docker restart ci-dind

# Wait for DinD to be ready again
sleep 30

# Login to Harbor from within DinD
echo "ci-user-password" | docker exec -i ci-dind docker login YOUR_CI_CD_IP -u ci-user --password-stdin

# Test Harbor connectivity from DinD (using certificate trust)
docker exec ci-dind docker pull alpine:latest
docker exec ci-dind docker tag alpine:latest YOUR_CI_CD_IP/APP_NAME/test:latest
docker exec ci-dind docker push YOUR_CI_CD_IP/APP_NAME/test:latest

# Clean up test image
docker exec ci-dind docker rmi YOUR_CI_CD_IP/APP_NAME/test:latest 

What this does:

  • Configures certificate trust: Properly sets up Harbor certificate trust in DinD
  • Fixes ownership issues: Ensures certificate has correct ownership for CA trust
  • Tests connectivity: Verifies DinD can pull, tag, and push images to Harbor
  • Validates setup: Ensures the complete CI/CD pipeline will work

7.3 CI/CD Workflow Architecture

The CI/CD pipeline uses a three-stage approach with dedicated environments for each stage:

Job 1 (Testing) - docker-compose.test.yml:

  • Purpose: Comprehensive testing with multiple containers
  • Environment: DinD with PostgreSQL, Rust, and Node.js containers
  • Services:
    • PostgreSQL database for backend tests
    • Rust toolchain for backend testing and migrations
    • Node.js toolchain for frontend testing
  • Network: All containers communicate through ci-cd-test-network
  • Setup: DinD container created, Harbor certificate installed, Docker login performed
  • Cleanup: Testing containers removed, DinD container kept running

Job 2 (Building) - Direct Docker Commands:

  • Purpose: Image building and pushing to Harbor
  • Environment: Same DinD container from Job 1
  • Process:
    • Uses Docker Buildx for efficient building
    • Builds backend and frontend images separately
    • Pushes images to Harbor registry
  • Harbor Access: Reuses Harbor authentication from Job 1
  • Cleanup: DinD container stopped and removed (clean slate for next run)

Job 3 (Deployment) - docker-compose.prod.yml:

  • Purpose: Production deployment with pre-built images
  • Environment: Production runner on Production Linode
  • Process:
    • Pulls images from Harbor registry
    • Deploys complete application stack
    • Verifies all services are healthy
  • Services: PostgreSQL, backend, frontend, Nginx

Key Benefits:

  • 🧹 Complete Isolation: Each job has its own dedicated environment
  • 🚫 No Resource Contention: Testing and building don't interfere with Harbor
  • Consistent Environment: Same setup every time
  • 🎯 Purpose-Specific: Each Docker Compose file serves a specific purpose
  • 🔄 Parallel Safety: Jobs can run safely in parallel

Testing DinD Setup:

# Test DinD functionality
docker exec ci-dind docker run --rm alpine:latest echo "DinD is working!"

# Test Harbor integration
docker exec ci-dind docker pull alpine:latest
docker exec ci-dind docker tag alpine:latest YOUR_CI_CD_IP/APP_NAME/dind-test:latest
docker exec ci-dind docker push YOUR_CI_CD_IP/APP_NAME/dind-test:latest

# Clean up test
docker exec ci-dind docker rmi YOUR_CI_CD_IP/APP_NAME/dind-test:latest

Expected Output:

  • DinD container should be running and accessible
  • Docker commands should work inside DinD
  • Harbor push/pull should work from DinD

7.4 Production Deployment Architecture

The production deployment uses a separate Docker Compose file (docker-compose.prod.yml) that pulls built images from the Harbor registry and deploys the complete application stack.

Production Stack Components:

  • PostgreSQL: Production database with persistent storage
  • Backend: Rust application built and pushed from CI/CD
  • Frontend: Next.js application built and pushed from CI/CD
  • Nginx: Reverse proxy with SSL termination

Deployment Flow:

  1. Production Runner: Runs on Production Linode with production label
  2. Image Pull: Pulls latest images from Harbor registry on CI Linode
  3. Stack Deployment: Uses docker-compose.prod.yml to deploy complete stack
  4. Health Verification: Ensures all services are healthy before completion

Key Benefits:

  • 🔄 Image Registry: Centralized image storage in Harbor
  • 📦 Consistent Deployment: Same images tested in CI are deployed to production
  • Fast Deployment: Only pulls changed images
  • 🛡️ Rollback Capability: Can easily rollback to previous image versions
  • 📊 Health Monitoring: Built-in health checks for all services
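
As a concrete illustration of the rollback capability mentioned above, the sketch below redeploys a previously pushed image version. It assumes docker-compose.prod.yml reads the image tag from an IMAGE_TAG environment variable (a hypothetical name; adapt it to however your compose file is actually parameterized):

# On the Production Linode: roll back to a previously pushed tag
cd /opt/APP_NAME
export IMAGE_TAG=previous-known-good-tag
docker compose -f docker-compose.prod.yml pull
docker compose -f docker-compose.prod.yml up -d
docker compose -f docker-compose.prod.yml ps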

7.5 Monitoring Script

Important: The repository includes a pre-configured monitoring script in the scripts/ directory that can be used for both CI/CD and production monitoring.

Repository Script:

  • scripts/monitor.sh - Comprehensive monitoring script with support for both CI/CD and production environments

To use the repository monitoring script:

# The repository is already cloned at /opt/APP_NAME/
cd /opt/APP_NAME

# Make the script executable
chmod +x scripts/monitor.sh

# Test CI/CD monitoring
./scripts/monitor.sh --type ci-cd

# Test production monitoring (if you have a production setup)
./scripts/monitor.sh --type production

Note: The repository script is more comprehensive and includes proper error handling, colored output, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate monitoring information.

Step 8: Configure Firewall

8.1 Configure UFW Firewall

sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 443/tcp  # Harbor registry (public read access)

Security Model:

  • Port 443 (Harbor): Public read access for public projects, authenticated write access
  • SSH: Allowed from anywhere by the rule above; restrict it to your own IP addresses if possible (e.g., sudo ufw allow from YOUR_IP to any port 22)
  • All other ports: Blocked

Step 9: Test CI/CD Setup

9.1 Test Docker Installation

docker --version
docker compose version

9.2 Check Harbor Status

cd /opt/harbor/harbor
docker compose ps

9.3 Test Harbor Access

# Test Harbor API
curl -k https://localhost/api/v2.0/health

# Test Harbor UI
curl -k -I https://localhost

Part 2: Production Linode Setup

Step 10: Initial System Setup

10.1 Update the System

sudo apt update && sudo apt upgrade -y

10.2 Configure Timezone

# Configure timezone interactively
sudo dpkg-reconfigure tzdata

# Verify timezone setting
date

What this does: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).

Expected output: After selecting your timezone, the date command should show the current date and time in your selected timezone.

10.3 Configure /etc/hosts

# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts

# Verify the configuration
cat /etc/hosts

What this does:

  • Adds localhost entries for both IPv4 and IPv6 addresses to /etc/hosts
  • Ensures proper localhost resolution for both IPv4 and IPv6

Important: Replace YOUR_PRODUCTION_IPV4_ADDRESS and YOUR_PRODUCTION_IPV6_ADDRESS with the actual IPv4 and IPv6 addresses of your Production Linode obtained from your Linode dashboard.

Expected output: The /etc/hosts file should show entries for 127.0.0.1, ::1, and your Linode's actual IP addresses all mapping to localhost.

10.4 Install Essential Packages

sudo apt install -y \
    curl \
    wget \
    git \
    ca-certificates \
    apt-transport-https \
    software-properties-common \
    ufw \
    fail2ban \
    htop \
    nginx \
    certbot \
    python3-certbot-nginx

Step 11: Create Users

11.1 Create the PROD_SERVICE_USER User

# Create dedicated group for the production service account
sudo groupadd -r PROD_SERVICE_USER

# Create production service account user with dedicated group
sudo useradd -r -g PROD_SERVICE_USER -s /bin/bash -m -d /home/PROD_SERVICE_USER PROD_SERVICE_USER
echo "PROD_SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd

11.2 Verify Users

sudo su - PROD_SERVICE_USER
whoami
pwd
exit

sudo su - PROD_DEPLOY_USER
whoami
pwd
exit

Step 12: Install Docker

12.1 Add Docker Repository

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update

12.2 Install Docker Packages

sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

12.3 Configure Docker for Production Service Account

sudo usermod -aG docker PROD_SERVICE_USER

Step 13: Configure Docker for Harbor Access

Important: The Production Linode needs to be able to pull Docker images from the Harbor registry on the CI/CD Linode.

# Add the CI/CD Harbor registry to Docker's insecure registries
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json << EOF
{
  "insecure-registries": ["YOUR_CI_CD_IP:8080"]
}
EOF

# Restart Docker to apply changes
sudo systemctl restart docker

Important: Replace YOUR_CI_CD_IP with your actual CI/CD Linode IP address.
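
Alternative (avoids marking the registry insecure): copy the Harbor certificate from the CI/CD Linode and let Docker trust it directly. Docker looks for a per-registry CA file at /etc/docker/certs.d/<registry-host>/ca.crt; the scp path below assumes the certificate location from Step 5.2:

# Copy the Harbor certificate from the CI/CD Linode (run on the Production Linode)
scp CI_DEPLOY_USER@YOUR_CI_CD_IP:/etc/ssl/registry/registry.crt /tmp/registry.crt

# Install it where Docker expects per-registry CA certificates
sudo mkdir -p /etc/docker/certs.d/YOUR_CI_CD_IP
sudo cp /tmp/registry.crt /etc/docker/certs.d/YOUR_CI_CD_IP/ca.crt

# Restart Docker and test a pull (any image already pushed to Harbor will do)
sudo systemctl restart docker
docker pull YOUR_CI_CD_IP/APP_NAME/test:latest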

Step 14: Set Up SSH Key Authentication

14.1 Add CI/CD Public Key

# Run these commands as PROD_SERVICE_USER (sudo su - PROD_SERVICE_USER)
# Create .ssh directory for PROD_SERVICE_USER
mkdir -p ~/.ssh
chmod 700 ~/.ssh

# Add the CI/CD public key (copy from CI/CD Linode)
echo "YOUR_CI_CD_PUBLIC_KEY" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Important: Replace YOUR_CI_CD_PUBLIC_KEY with the public key from the CI/CD Linode (the output of cat ~/.ssh/id_ed25519.pub on the CI/CD Linode). If no key pair exists there yet, generate one first with ssh-keygen -t ed25519.

14.2 Test SSH Connection

From the CI/CD Linode (as the user whose public key you added above), test the SSH connection:

ssh PROD_SERVICE_USER@YOUR_PRODUCTION_IP 'echo "SSH connection from CI/CD works"'

Expected output: The echo message should appear without a password prompt.

Step 15: Set Up Forgejo Runner for Production Deployment

Important: The Production Linode needs a Forgejo runner to execute the deployment job from the CI/CD workflow. This runner will pull images from Harbor and deploy using docker-compose.prod.yml.

15.1 Install Forgejo Runner

# Download the Forgejo runner (same source and version pattern as Step 6.1; pin to a known-good release)
VERSION="v6.3.1"
wget -O forgejo-runner https://code.forgejo.org/forgejo/runner/releases/download/${VERSION}/forgejo-runner-${VERSION#v}-linux-amd64

# Make it executable
chmod +x forgejo-runner

# Move to system location
sudo mv forgejo-runner /usr/bin/forgejo-runner

# Verify installation
forgejo-runner --version

15.2 Set Up Runner Directory for PROD_SERVICE_USER

# Create runner directory owned by PROD_SERVICE_USER
sudo mkdir -p /opt/forgejo-runner
sudo chown PROD_SERVICE_USER:PROD_SERVICE_USER /opt/forgejo-runner

15.3 Get Registration Token

  1. Go to your Forgejo repository
  2. Navigate to Settings → Actions → Runners
  3. Click "New runner"
  4. Copy the registration token

15.4 Register the Production Runner

# Switch to PROD_SERVICE_USER
sudo su - PROD_SERVICE_USER

# Register the runner with production label
forgejo-runner register \
  --instance https://your-forgejo-instance \
  --token YOUR_REGISTRATION_TOKEN \
  --name "production-runner" \
  --labels "prod" \
  --no-interactive

# Copy configuration to system location
sudo cp /home/PROD_SERVICE_USER/.runner /opt/forgejo-runner/.runner
sudo chown PROD_SERVICE_USER:PROD_SERVICE_USER /opt/forgejo-runner/.runner
sudo chmod 600 /opt/forgejo-runner/.runner

Important: Replace your-forgejo-instance with your actual Forgejo instance URL and YOUR_REGISTRATION_TOKEN with the token you copied from Step 15.3.

15.5 Create Systemd Service

# Create systemd service file
sudo tee /etc/systemd/system/forgejo-runner.service > /dev/null << 'EOF'
[Unit]
Description=Forgejo Actions Runner (Production)
After=network.target docker.service

[Service]
Type=simple
User=PROD_SERVICE_USER
WorkingDirectory=/opt/forgejo-runner
ExecStart=/usr/bin/forgejo-runner daemon
Restart=always
RestartSec=10
Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

[Install]
WantedBy=multi-user.target
EOF

# Enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable forgejo-runner.service
sudo systemctl start forgejo-runner.service

# Verify the service is running
sudo systemctl status forgejo-runner.service

15.6 Test Runner Configuration

# Check if the runner is running
sudo systemctl status forgejo-runner.service

# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager

# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "production-runner" with status "Online"

Expected Output:

  • systemctl status should show "active (running)"
  • Forgejo web interface should show the runner as online with "prod" label

Important: The CI/CD workflow (.forgejo/workflows/ci.yml) is already configured to use this production runner. The deploy job runs on runs-on: [self-hosted, prod], which means it will execute on any runner with the "prod" label. When the workflow runs, it will:

  1. Pull the latest Docker images from Harbor registry
  2. Use the docker-compose.prod.yml file to deploy the application stack
  3. Create the necessary environment variables for production deployment
  4. Verify that all services are healthy after deployment

The production runner will automatically handle the deployment process when you push to the main branch.

15.7 Understanding the Production Docker Compose Setup

The docker-compose.prod.yml file is specifically designed for production deployment and differs from development setups:

Key Features:

  • Image-based deployment: Uses pre-built images from Harbor registry instead of building from source
  • Production networking: All services communicate through a dedicated sharenet-network
  • Health checks: Each service includes health checks to ensure proper startup order
  • Nginx reverse proxy: Includes Nginx for SSL termination, load balancing, and security headers
  • Persistent storage: PostgreSQL data is stored in a named volume for persistence
  • Environment variables: Uses environment variables for configuration (set by the CI/CD workflow)

Service Architecture:

  1. PostgreSQL: Database with health checks and persistent storage
  2. Backend: Rust API service that waits for PostgreSQL to be healthy
  3. Frontend: Next.js application that waits for backend to be healthy
  4. Nginx: Reverse proxy that serves the frontend and proxies API requests to backend

Deployment Process:

  1. The production runner pulls the latest images from Harbor registry
  2. Creates environment variables for the deployment
  3. Runs docker compose -f docker-compose.prod.yml up -d
  4. Waits for all services to be healthy
  5. Verifies the deployment was successful
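
If you ever need to run the same deployment by hand (for example while debugging the workflow), the equivalent commands look roughly like the sketch below. The exact environment variables depend on what docker-compose.prod.yml expects, so treat this as an outline rather than the workflow's literal steps:

# On the Production Linode, as PROD_SERVICE_USER
cd /opt/APP_NAME

# Authenticate against Harbor and pull the latest images
echo "YOUR_HARBOR_CI_PASSWORD" | docker login YOUR_CI_CD_IP -u ci-user --password-stdin
docker compose -f docker-compose.prod.yml pull

# Deploy the stack and confirm every service reports healthy
docker compose -f docker-compose.prod.yml up -d
docker compose -f docker-compose.prod.yml ps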

Step 16: Configure Security

16.1 Configure Firewall

sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 3000/tcp
sudo ufw allow 3001/tcp

16.2 Configure Fail2ban

sudo systemctl enable fail2ban
sudo systemctl start fail2ban
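
To confirm fail2ban is actually protecting SSH, check its jail status (the sshd jail is enabled by default in Ubuntu's fail2ban package; if it is missing, enable it in /etc/fail2ban/jail.local):

# Check overall fail2ban status and the SSH jail specifically
sudo fail2ban-client status
sudo fail2ban-client status sshd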

Step 17: Test Production Setup

17.1 Test Docker Installation

docker --version
docker compose --version

17.2 Test Harbor Access

# Test pulling an image from the CI/CD Harbor registry
# (use the test image from Step 5.7 if the pipeline has not pushed application images yet)
docker pull YOUR_CI_CD_IP/APP_NAME/backend:latest

Important: Replace YOUR_CI_CD_IP with your actual CI/CD Linode IP address.

17.3 Test Application Deployment

# This assumes the repository (with docker-compose.prod.yml) has been cloned to /opt/APP_NAME
cd /opt/APP_NAME
docker compose -f docker-compose.prod.yml up -d

17.4 Verify Application Status

docker compose -f docker-compose.prod.yml ps
curl http://localhost:3000
curl http://localhost:3001/health

Expected Output:

  • All containers should be running
  • Frontend should be accessible on port 3000
  • Backend health check should return 200 OK

Part 3: Final Configuration and Testing

Step 19: Configure Forgejo Repository Secrets

Go to your Forgejo repository and add these secrets in Settings → Secrets and Variables → Actions:

Required Secrets:

  • CI_HOST: Your CI/CD Linode IP address (used for Harbor registry access)
  • PRODUCTION_IP: Your Production Linode IP address
  • PROD_DEPLOY_USER: The production deployment user name (e.g., prod-deploy)
  • PROD_SERVICE_USER: The production service user name (e.g., prod-service)
  • APP_NAME: Your application name (e.g., sharenet)
  • POSTGRES_PASSWORD: A strong password for the PostgreSQL database
  • HARBOR_CI_USER: Harbor username for CI operations (e.g., ci-user)
  • HARBOR_CI_PASSWORD: Harbor password for CI operations (the password you set for ci-user)

Optional Secrets (for domain users):

  • DOMAIN: Your domain name (e.g., example.com)
  • EMAIL: Your email for SSL certificate notifications

Step 20: Test Complete Pipeline

20.1 Trigger a Test Build

  1. Make a small change to your repository (e.g., update a comment or add a test file)
  2. Commit and push the changes to trigger the CI/CD pipeline
  3. Monitor the build in your Forgejo repository → Actions tab

20.2 Verify Pipeline Steps

The pipeline should execute these steps in order:

  1. Checkout: Clone the repository
  2. Setup DinD: Configure Docker-in-Docker environment
  3. Test Backend: Run backend tests in isolated environment
  4. Test Frontend: Run frontend tests in isolated environment
  5. Build Backend: Build backend Docker image in DinD
  6. Build Frontend: Build frontend Docker image in DinD
  7. Push to Registry: Push images to Harbor registry from DinD
  8. Deploy to Production: Deploy to production server

20.3 Check Harbor

# On CI/CD Linode

# Check if new images were pushed (Harbor API; use the ci-user credentials you created in Step 5.6)
curl -k -u "ci-user:YOUR_HARBOR_CI_PASSWORD" "https://localhost/api/v2.0/projects/APP_NAME/repositories"

# You can also browse Projects → APP_NAME → Repositories in the Harbor web UI

20.4 Verify Production Deployment

# On Production Linode
cd /opt/APP_NAME

# Check if containers are running with new images
docker compose -f docker-compose.prod.yml ps

# Check application health
curl http://localhost:3000
curl http://localhost:3001/health

# Check container logs for any errors
docker compose -f docker-compose.prod.yml logs backend
docker compose -f docker-compose.prod.yml logs frontend

20.5 Test Application Functionality

  1. Frontend: Visit your production URL (IP or domain)
  2. Backend API: Test API endpoints
  3. Database: Verify database connections
  4. Logs: Check for any errors in application logs

Step 21: Set Up SSL/TLS (Optional - Domain Users)

21.1 Install SSL Certificate

If you have a domain pointing to your Production Linode:

# On Production Linode
sudo certbot --nginx -d your-domain.com

# Verify certificate
sudo certbot certificates

21.2 Configure Auto-Renewal

# Test auto-renewal
sudo certbot renew --dry-run

# Add to crontab for automatic renewal
sudo crontab -e
# Add this line:
# 0 12 * * * /usr/bin/certbot renew --quiet

Step 22: Final Verification

22.1 Security Check

# Check firewall status
sudo ufw status

# Check fail2ban status
sudo systemctl status fail2ban

# Check SSH access (should be key-based only)
sudo grep "PasswordAuthentication" /etc/ssh/sshd_config

22.2 Performance Check

# Check system resources
htop

# Check disk usage
df -h

# Check Docker disk usage
docker system df

22.3 Backup Verification

# Test backup script
cd /opt/APP_NAME
./scripts/backup.sh --dry-run

# Run actual backup
./scripts/backup.sh

Step 23: Documentation and Maintenance

23.1 Update Documentation

  1. Update README.md with deployment information
  2. Document environment variables and their purposes
  3. Create troubleshooting guide for common issues
  4. Document backup and restore procedures

23.2 Set Up Monitoring Alerts

# Set up monitoring cron job
(crontab -l 2>/dev/null; echo "*/5 * * * * cd /opt/APP_NAME && ./scripts/monitor.sh --type production >> /tmp/monitor.log 2>&1") | crontab -

# Check monitoring logs
tail -f /tmp/monitor.log

23.3 Regular Maintenance Tasks

Daily:

  • Check application logs for errors
  • Monitor system resources
  • Verify backup completion

Weekly:

  • Review security logs
  • Update system packages
  • Test backup restoration

Monthly:

  • Review and rotate logs
  • Update SSL certificates
  • Review and update documentation

🎉 Congratulations!

You have successfully set up a complete CI/CD pipeline with:

  • Automated testing on every code push in isolated DinD environment
  • Docker image building and Harbor registry storage
  • Automated deployment to production
  • Health monitoring and logging
  • Backup and cleanup automation
  • Security hardening with proper user separation
  • SSL/TLS support for production (optional)
  • Zero resource contention between CI/CD and Harbor

Your application is now ready for continuous deployment with proper security, monitoring, and maintenance procedures in place!

CI/CD Workflow Summary Table

Stage     What Runs                       How/Where
Test      All integration/unit tests      docker-compose.test.yml
Build     Build & push images             Direct Docker commands
Deploy    Deploy to production            docker-compose.prod.yml

How it works:

  • Test: The workflow spins up a full test environment using docker-compose.test.yml (Postgres, backend, frontend, etc.) and runs all tests inside containers.
  • Build: If tests pass, the workflow uses direct Docker commands (no compose file) to build backend and frontend images and push them to Harbor.
  • Deploy: The production runner pulls images from Harbor and deploys the stack using docker-compose.prod.yml.

Expected Output:

  • Each stage runs in its own isolated environment.
  • Test failures stop the pipeline before any images are built or deployed.
  • Only tested images are deployed to production.

Manual Testing with docker-compose.test.yml

You can use the same test environment locally that the CI pipeline uses for integration testing. This is useful for debugging, development, or verifying your setup before pushing changes.

Start the Test Environment

docker compose -f docker-compose.test.yml up -d

This will start all services needed for integration tests (PostgreSQL, backend, frontend, etc.) in the background.

Check Service Health

docker compose -f docker-compose.test.yml ps

Look for the healthy status in the output to ensure all services are ready.

Run Tests Manually

You can now exec into the containers to run tests or commands as needed. For example:

# Run backend tests
docker exec ci-cd-test-rust cargo test --all

# Run frontend tests
docker exec ci-cd-test-node npm run test

Cleanup

When you're done, stop and remove all test containers:

docker compose -f docker-compose.test.yml down

Tip: This is the same environment and process used by the CI pipeline, so passing tests here means they should also pass in CI.