CI/CD Pipeline Setup Guide

This guide covers setting up a complete Continuous Integration/Continuous Deployment (CI/CD) pipeline with a CI/CD Linode and Production Linode for automated builds, testing, and deployments using Docker-in-Docker (DinD) for isolated CI operations.

Architecture Overview

┌─────────────────┐    ┌───────────────────┐    ┌──────────────────┐
│   Forgejo Host  │    │   CI/CD Linode    │    │ Production Linode│
│   (Repository)  │    │ (Actions Runner)  │    │ (Docker Deploy)  │
│                 │    │ + Harbor Registry │    │                  │
│                 │    │ + DinD Container  │    │                  │
└─────────────────┘    └───────────────────┘    └──────────────────┘
         │                       │                       │
         │                       │                       │
         └─────────── Push ──────┼───────────────────────┘
                                 │
                                 └─── Deploy ────────────┘

Pipeline Flow

  1. Code Push: Developer pushes code to Forgejo repository
  2. Automated Testing: CI/CD Linode runs tests in isolated DinD environment
  3. Image Building: If tests pass, Docker images are built within DinD
  4. Registry Push: Images are pushed to Harbor registry from DinD
  5. Production Deployment: Production Linode pulls images and deploys
  6. Health Check: Application is verified and accessible
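
The heavy lifting in steps 2-4 happens inside the DinD container on the CI/CD Linode, driven by the repository's .forgejo/workflows/ci.yml. As a rough illustration of steps 5 and 6, the deployment stage boils down to something like the following sketch (the "production" SSH alias comes from Step 6.2; the compose stack location and health endpoint are assumptions to adapt to your application):

# Pull the freshly pushed images and restart the stack on the Production Linode
ssh production 'cd /opt/APP_NAME && docker compose pull && docker compose up -d'

# Verify the application responds (path and port are placeholders for your app's health endpoint)
ssh production 'curl -fsS http://localhost:3000/ > /dev/null && echo "Deployment healthy"'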

Key Benefits of DinD Approach

For Rust Testing:

  • Fresh environment every test run
  • Parallel execution capability
  • Isolated dependencies - no test pollution
  • Fast cleanup - just restart DinD container

For CI/CD Operations:

  • Zero resource contention with Harbor
  • Simple cleanup - one-line container restart
  • Perfect isolation - CI/CD can't affect Harbor
  • Consistent environment - same setup every time

For Maintenance:

  • Reduced complexity - no complex cleanup scripts
  • Easy debugging - isolated environment
  • Reliable operations - no interference between services

Prerequisites

  • Two Ubuntu 24.04 LTS Linodes with root access
  • Basic familiarity with Linux commands and SSH
  • Forgejo repository with Actions enabled
  • Optional: Domain name for Production Linode (for SSL/TLS)

Quick Start

  1. Set up the CI/CD Linode (Part 1)
  2. Set up the Production Linode (Part 2)
  3. Configure SSH key exchange between the Linodes
  4. Set up Forgejo repository secrets
  5. Test the complete pipeline

What's Included

CI/CD Linode Features

  • Forgejo Actions runner for automated builds
  • Docker-in-Docker (DinD) container for isolated CI operations
  • Harbor container registry for image storage
  • Harbor web UI for image management
  • Built-in vulnerability scanning with Trivy
  • Role-based access control and audit logs
  • Secure SSH communication with production
  • Simplified cleanup - just restart DinD container

Production Linode Features

  • Docker-based application deployment
  • Optional SSL/TLS certificate management (if domain is provided)
  • Nginx reverse proxy with security headers
  • Automated backups and monitoring
  • Firewall and fail2ban protection

Pipeline Features

  • Automated testing on every code push in isolated environment
  • Automated image building and registry push from DinD
  • Automated deployment to production
  • Rollback capability with image versioning
  • Health monitoring and logging
  • Zero resource contention between CI/CD and Harbor
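
The rollback capability works because every image pushed to Harbor keeps its version tag. A minimal manual-rollback sketch, assuming the .env layout shown later in Step 18.3 (REGISTRY, IMAGE_NAME, IMAGE_TAG) and a compose file that reads IMAGE_TAG:

# On the Production Linode: point IMAGE_TAG at a previously pushed tag and redeploy
cd /opt/APP_NAME
sed -i 's/^IMAGE_TAG=.*/IMAGE_TAG=v1.2.3/' .env   # v1.2.3 is a placeholder for the known-good tag
docker compose pull
docker compose up -d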

Security Model and User Separation

This setup uses a principle of least privilege approach with separate users for different purposes:

User Roles

  1. Root User

    • Purpose: Initial system setup only
    • SSH Access: Disabled after setup
    • Privileges: Full system access (used only during initial configuration)
  2. Deployment User (DEPLOY_USER)

    • Purpose: SSH access, deployment tasks, system administration
    • SSH Access: Enabled with key-based authentication
    • Privileges: Sudo access for deployment and administrative tasks
    • Examples: deploy, ci, admin
  3. Service Account (SERVICE_USER)

    • Purpose: Running application services (Docker containers, databases)
    • SSH Access: None (no login shell)
    • Privileges: No sudo access, minimal system access
    • Examples: appuser, service, app

Security Benefits

  • No root SSH access: Eliminates the most common attack vector
  • Principle of least privilege: Each user has only the access they need
  • Separation of concerns: Deployment tasks vs. service execution are separate
  • Audit trail: Clear distinction between deployment and service activities
  • Reduced attack surface: Service account has minimal privileges

File Permissions

  • Application files: Owned by SERVICE_USER for security
  • Docker operations: Run by DEPLOY_USER with sudo (deployment only)
  • Service execution: Run by SERVICE_USER (no sudo needed)
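
Once setup is complete, a quick spot-check confirms this separation is in place (shown with the placeholder names used throughout this guide):

# Application files should be owned by the service account
ls -ld /opt/APP_NAME

# The deployment user should be in the sudo group; the service account should not
id DEPLOY_USER
id SERVICE_USER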

Prerequisites and Initial Setup

What's Already Done (Assumptions)

This guide assumes you have already:

  1. Created two Ubuntu 24.04 LTS Linodes with root access
  2. Set root passwords for both Linodes
  3. Have SSH client installed on your local machine
  4. Have Forgejo repository with Actions enabled
  5. Optional: Domain name pointing to Production Linode's IP addresses

Step 0: Initial SSH Access and Verification

Before proceeding with the setup, you need to establish initial SSH access to both Linodes.

0.1 Get Your Linode IP Addresses

From your Linode dashboard, note the IP addresses for:

  • CI/CD Linode: YOUR_CI_CD_IP (IP address only, no domain needed)
  • Production Linode: YOUR_PRODUCTION_IP (IP address for SSH, domain for web access)

0.2 Test Initial SSH Access

Test SSH access to both Linodes:

# Test CI/CD Linode (IP address only)
ssh root@YOUR_CI_CD_IP

# Test Production Linode (IP address only)
ssh root@YOUR_PRODUCTION_IP

Expected output: SSH login prompt asking for root password.

If something goes wrong:

  • Verify the IP addresses are correct
  • Check that SSH is enabled on the Linodes
  • Ensure your local machine can reach the Linodes (no firewall blocking)

0.3 Choose Your Names

Before proceeding, decide on:

  1. Service Account Name: Choose a username for the service account (e.g., appuser, deploy, service)

    • Replace SERVICE_USER in this guide with your chosen name
    • This account runs the actual application services
  2. Deployment User Name: Choose a username for deployment tasks (e.g., deploy, ci, admin)

    • Replace DEPLOY_USER in this guide with your chosen name
    • This account has sudo privileges for deployment tasks
  3. Application Name: Choose a name for your application (e.g., myapp, webapp, api)

    • Replace APP_NAME in this guide with your chosen name
  4. Domain Name (Optional): If you have a domain, note it for SSL configuration

    • Replace your-domain.com in this guide with your actual domain

Example:

  • If you choose appuser as service account, deploy as deployment user, and myapp as application name:
    • Replace all SERVICE_USER with appuser
    • Replace all DEPLOY_USER with deploy
    • Replace all APP_NAME with myapp
    • If you have a domain example.com, replace your-domain.com with example.com
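
One optional convenience is to export your chosen names as shell variables at the start of each session and reference them instead of editing every command by hand. This is only a sketch; the commands in this guide show the literal placeholders, so adapt them to use the variables if you go this route:

# Export your chosen names once per shell session
export SERVICE_USER=appuser
export DEPLOY_USER=deploy
export APP_NAME=myapp

# Then reference them in commands, for example:
sudo useradd -m -s /bin/bash "$DEPLOY_USER"
sudo mkdir -p "/opt/$APP_NAME"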

Security Model:

  • Service Account (SERVICE_USER): Runs application services, no sudo access
  • Deployment User (DEPLOY_USER): Handles deployments via SSH, has sudo access
  • Root: Only used for initial setup, then disabled for SSH access

0.4 Set Up SSH Key Authentication for Local Development

Important: This step should be done on both Linodes to enable secure SSH access from your local development machine.

0.4.1 Generate SSH Key on Your Local Machine

On your local development machine, generate an SSH key pair:

# Generate SSH key pair (if you don't already have one)
ssh-keygen -t ed25519 -C "your-email@example.com" -f ~/.ssh/id_ed25519 -N ""

# Or use existing key if you have one
ls ~/.ssh/id_ed25519.pub
0.4.2 Add Your Public Key to Both Linodes

Copy your public key to both Linodes:

# Copy your public key to CI/CD Linode
ssh-copy-id root@YOUR_CI_CD_IP

# Copy your public key to Production Linode
ssh-copy-id root@YOUR_PRODUCTION_IP

Alternative method (if ssh-copy-id doesn't work):

# Copy your public key content
cat ~/.ssh/id_ed25519.pub

# Then manually add to each server
ssh root@YOUR_CI_CD_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys

ssh root@YOUR_PRODUCTION_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys
0.4.3 Test SSH Key Authentication

Test that you can access both servers without passwords:

# Test CI/CD Linode
ssh root@YOUR_CI_CD_IP 'echo "SSH key authentication works for CI/CD"'

# Test Production Linode
ssh root@YOUR_PRODUCTION_IP 'echo "SSH key authentication works for Production"'

Expected output: The echo messages should appear without password prompts.

0.4.4 Create Deployment Users

On both Linodes, create the deployment user with sudo privileges:

# Create deployment user
sudo useradd -m -s /bin/bash DEPLOY_USER
sudo usermod -aG sudo DEPLOY_USER

# Set a secure password (for emergency access only)
echo "DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd

# Copy your SSH key to the deployment user
sudo mkdir -p /home/DEPLOY_USER/.ssh
sudo cp ~/.ssh/authorized_keys /home/DEPLOY_USER/.ssh/
sudo chown -R DEPLOY_USER:DEPLOY_USER /home/DEPLOY_USER/.ssh
sudo chmod 700 /home/DEPLOY_USER/.ssh
sudo chmod 600 /home/DEPLOY_USER/.ssh/authorized_keys

# Configure sudo to use SSH key authentication (most secure)
echo "DEPLOY_USER ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/DEPLOY_USER
sudo chmod 440 /etc/sudoers.d/DEPLOY_USER

Security Note: This configuration allows the DEPLOY_USER to use sudo without a password, which is more secure for CI/CD automation since there are no passwords to store or expose. The random password is set for emergency console access only.

0.4.5 Test Sudo Access

Test that the deployment user can use sudo without password prompts:

# Test sudo access
ssh DEPLOY_USER@YOUR_CI_CD_IP 'sudo whoami'
ssh DEPLOY_USER@YOUR_PRODUCTION_IP 'sudo whoami'

Expected output: Both commands should return root without prompting for a password.

0.4.6 Test Deployment User Access

Test that you can access both servers as the deployment user:

# Test CI/CD Linode
ssh DEPLOY_USER@YOUR_CI_CD_IP 'echo "Deployment user SSH access works for CI/CD"'

# Test Production Linode
ssh DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "Deployment user SSH access works for Production"'

Expected output: The echo messages should appear without password prompts.

0.4.7 Create SSH Config for Easy Access

On your local machine, create an SSH config file for easy access:

# Create SSH config
cat > ~/.ssh/config << 'EOF'
Host ci-cd-dev
    HostName YOUR_CI_CD_IP
    User DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no

Host production-dev
    HostName YOUR_PRODUCTION_IP
    User DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no
EOF

chmod 600 ~/.ssh/config

Now you can access servers easily:

ssh ci-cd-dev
ssh production-dev

Part 1: CI/CD Linode Setup

Step 1: Initial System Setup

1.1 Update the System

sudo apt update && sudo apt upgrade -y

What this does: Updates package lists and upgrades all installed packages to their latest versions.

Expected output: A list of packages being updated, followed by completion messages.

1.2 Configure Timezone

# Configure timezone interactively
sudo dpkg-reconfigure tzdata

# Verify timezone setting
date

What this does: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).

Expected output: After selecting your timezone, the date command should show the current date and time in your selected timezone.

1.3 Configure /etc/hosts

# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts

# Verify the configuration
cat /etc/hosts

What this does:

  • Adds localhost entries for both IPv4 and IPv6 addresses to /etc/hosts
  • Ensures proper localhost resolution for both IPv4 and IPv6

Important: Replace YOUR_CI_CD_IPV4_ADDRESS and YOUR_CI_CD_IPV6_ADDRESS with the actual IPv4 and IPv6 addresses of your CI/CD Linode obtained from your Linode dashboard.

Expected output: The /etc/hosts file should show entries for 127.0.0.1, ::1, and your Linode's actual IP addresses all mapping to localhost.

1.4 Install Essential Packages

sudo apt install -y \
    curl \
    wget \
    git \
    jq \
    build-essential \
    pkg-config \
    libssl-dev \
    ca-certificates \
    apt-transport-https \
    software-properties-common \
    apache2-utils

What this does: Installs development tools, SSL libraries, and utilities needed for Docker and application building. jq is included because it is used later (Step 7.1) to query the Forgejo runner releases API.

Step 2: Create Users

2.1 Create Service Account

# Create dedicated group for the service account
sudo groupadd -r SERVICE_USER

# Create service account user with dedicated group
sudo useradd -r -g SERVICE_USER -s /bin/bash -m -d /home/SERVICE_USER SERVICE_USER
echo "SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd

2.2 Verify Users

sudo su - SERVICE_USER
whoami
pwd
exit

sudo su - DEPLOY_USER
whoami
pwd
exit

Step 3: Clone Repository for Registry Configuration

# Switch to DEPLOY_USER (who has sudo access)
sudo su - DEPLOY_USER

# Create application directory and clone repository
sudo mkdir -p /opt/APP_NAME
sudo chown SERVICE_USER:SERVICE_USER /opt/APP_NAME
cd /opt
sudo git clone https://your-forgejo-instance/your-username/APP_NAME.git
sudo chown -R SERVICE_USER:SERVICE_USER APP_NAME/

# Verify the registry folder exists
ls -la /opt/APP_NAME/registry/

Important: Replace your-forgejo-instance, your-username, and APP_NAME with your actual Forgejo instance URL, username, and application name.

What this does:

  • DEPLOY_USER creates the directory structure and clones the repository
  • SERVICE_USER owns all the files for security
  • Registry configuration files are now available at /opt/APP_NAME/registry/

Step 4: Install Docker

4.1 Add Docker Repository

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update

4.2 Install Docker Packages

sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

4.3 Configure Docker for Service Account

sudo usermod -aG docker SERVICE_USER

Step 5: Set Up Harbor Container Registry

5.1 Generate SSL Certificates

# Create system SSL directory for Harbor certificates
sudo mkdir -p /etc/ssl/registry

# Get your actual IP address
YOUR_ACTUAL_IP=$(curl -4 -s ifconfig.me)
echo "Your IP address is: $YOUR_ACTUAL_IP"

# Create OpenSSL configuration file with proper SANs
sudo tee /etc/ssl/registry/openssl.conf << EOF
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no

[req_distinguished_name]
C = US
ST = State
L = City
O = Organization
CN = $YOUR_ACTUAL_IP

[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
IP.1 = $YOUR_ACTUAL_IP
DNS.1 = $YOUR_ACTUAL_IP
DNS.2 = localhost
EOF

# Generate self-signed certificate with proper SANs
sudo openssl req -x509 -newkey rsa:4096 -keyout /etc/ssl/registry/registry.key -out /etc/ssl/registry/registry.crt -days 365 -nodes -extensions v3_req -config /etc/ssl/registry/openssl.conf

# Set proper permissions
sudo chmod 600 /etc/ssl/registry/registry.key
sudo chmod 644 /etc/ssl/registry/registry.crt
sudo chmod 644 /etc/ssl/registry/openssl.conf

Important: The certificate is now generated with proper Subject Alternative Names (SANs) including your IP address, which is required for TLS certificate validation by Docker and other clients.

Note: The permissions are set to:

  • registry.key: 600 (owner read/write only) - private key must be secure
  • registry.crt: 644 (owner read/write, group/others read) - certificate can be read by services
  • openssl.conf: 644 (owner read/write, group/others read) - configuration file for reference

5.1.1 Configure Docker to Trust Harbor Registry

# Add the certificate to system CA certificates
sudo cp /etc/ssl/registry/registry.crt /usr/local/share/ca-certificates/registry.crt
sudo update-ca-certificates

# Configure Docker to trust the Harbor registry
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json << EOF
{
  "insecure-registries": ["YOUR_CI_CD_IP"],
  "registry-mirrors": []
}
EOF

# Restart Docker to apply the new configuration
sudo systemctl restart docker

Important: Replace YOUR_CI_CD_IP with your actual CI/CD Linode IP address. This configuration tells Docker to trust your Harbor registry and allows Docker login to work properly.
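
As an alternative to insecure-registries, Docker can be given the self-signed certificate explicitly through its per-registry certificate directory, which keeps TLS verification enabled. A sketch using the certificate generated in Step 5.1:

# Optional: trust the Harbor certificate instead of marking the registry insecure
sudo mkdir -p /etc/docker/certs.d/YOUR_CI_CD_IP
sudo cp /etc/ssl/registry/registry.crt /etc/docker/certs.d/YOUR_CI_CD_IP/ca.crt
sudo systemctl restart docker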

5.2 Generate Secure Passwords and Secrets

# Set environment variables for Harbor
export HARBOR_HOSTNAME=$YOUR_ACTUAL_IP
export HARBOR_ADMIN_PASSWORD="Harbor12345"

# Generate secure database password for Harbor
export DB_PASSWORD=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-25)

# Generate secure secrets for Harbor
export CORE_SECRET=$(openssl rand -hex 16)
export JOBSERVICE_SECRET=$(openssl rand -hex 16)

echo "Generated secrets:"
echo "DB_PASSWORD: $DB_PASSWORD"
echo "CORE_SECRET: $CORE_SECRET"
echo "JOBSERVICE_SECRET: $JOBSERVICE_SECRET"

# Save secrets securely for future reference
cat > /opt/APP_NAME/harbor-secrets.txt << EOF
# Harbor Secrets - KEEP THESE SECURE!
# Generated on: $(date)
# CI/CD IP: $YOUR_ACTUAL_IP

HARBOR_HOSTNAME=$HARBOR_HOSTNAME
HARBOR_ADMIN_PASSWORD=$HARBOR_ADMIN_PASSWORD
DB_PASSWORD=$DB_PASSWORD
CORE_SECRET=$CORE_SECRET
JOBSERVICE_SECRET=$JOBSERVICE_SECRET

# IMPORTANT: Store this file securely and keep a backup!
# You will need these secrets for:
# - Harbor upgrades
# - Database troubleshooting
# - Disaster recovery
# - Service restoration
EOF

# Set secure permissions on secrets file
chmod 600 /opt/APP_NAME/harbor-secrets.txt
echo "Secrets saved to /opt/APP_NAME/harbor-secrets.txt"
echo "IMPORTANT: Keep this file secure and backed up!"

Important:

  • Change the default passwords for production use. The default admin password is Harbor12345 - change this immediately after first login.
  • The generated secrets (CORE_SECRET and JOBSERVICE_SECRET) are cryptographically secure random values used for encrypting sensitive data.
  • Store these secrets securely as they will be needed for Harbor upgrades or troubleshooting.
  • CRITICAL: The secrets file contains sensitive information. Keep it secure and backed up!

5.3 Install Harbor Using Official Installer

# Switch to DEPLOY_USER (who has sudo access)
sudo su - DEPLOY_USER

cd /opt/APP_NAME

# Download Harbor 2.10.0 offline installer
sudo wget https://github.com/goharbor/harbor/releases/download/v2.10.0/harbor-offline-installer-v2.10.0.tgz

sudo tar -xzf harbor-offline-installer-v2.10.0.tgz

cd harbor
sudo cp harbor.yml.tmpl harbor.yml

# Edit harbor.yml configuration
sudo nano harbor.yml

Important: In the harbor.yml file, update the following variables:

  • hostname: YOUR_CI_CD_IP (replace with your actual IP)
  • certificate: /etc/ssl/registry/registry.crt
  • private_key: /etc/ssl/registry/registry.key
  • password: <the DB_PASSWORD generated in Step 5.2>

Note: Leave harbor_admin_password as Harbor12345 for now. This will be changed at first login through the UI after launching Harbor.

5.4 Prepare and Install Harbor

# Prepare Harbor configuration
sudo ./prepare

# Install Harbor with Trivy vulnerability scanner
sudo ./install.sh --with-trivy

cd ..

# Change harbor folder permissions recursively to SERVICE_USER
sudo chown -R SERVICE_USER:SERVICE_USER harbor

# Switch to SERVICE_USER to run installation again as non-root
sudo su - SERVICE_USER

cd /opt/APP_NAME/harbor

# Install Harbor as SERVICE_USER (permissions are partially adjusted correctly)
./install.sh --with-trivy

# Exit SERVICE_USER shell
exit

5.5 Fix Permission Issues

# Switch back to DEPLOY_USER to adjust the permissions for various env files
cd /opt/APP_NAME/harbor

sudo chown SERVICE_USER:SERVICE_USER common/config/jobservice/env
sudo chown SERVICE_USER:SERVICE_USER common/config/db/env
sudo chown SERVICE_USER:SERVICE_USER common/config/registryctl/env
sudo chown SERVICE_USER:SERVICE_USER common/config/trivy-adapter/env
sudo chown SERVICE_USER:SERVICE_USER common/config/core/env

# Exit DEPLOY_USER shell
exit

5.6 Test Harbor Installation

# Switch to SERVICE_USER
sudo su - SERVICE_USER

cd /opt/APP_NAME/harbor

# Verify you can stop Harbor. All Harbor containers should stop.
docker compose down

# Verify you can bring Harbor back up. All Harbor containers should start back up.
docker compose up -d

# Exit SERVICE_USER shell
exit

Important: Harbor startup can take 2-3 minutes as it initializes the database and downloads vulnerability databases. The health check will ensure all services are running properly.

5.7 Wait for Harbor Startup

# Monitor Harbor startup progress
cd /opt/APP_NAME/harbor
docker compose logs -f

Expected output: You should see logs from all Harbor services (core, database, redis, registry, portal, nginx, jobservice, trivy) starting up. Wait until you see "Harbor has been installed and started successfully" or similar success messages.
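
Instead of watching the logs, you can poll the same health endpoint used in Step 5.8 until Harbor reports healthy. A small sketch (waits up to about 5 minutes; the grep assumes Harbor's compact JSON health response):

# Poll the Harbor health API until it reports healthy
for i in $(seq 1 60); do
  if curl -k -s https://localhost/api/v2.0/health | grep -q '"status":"healthy"'; then
    echo "Harbor is healthy"
    break
  fi
  echo "Waiting for Harbor... ($i/60)"
  sleep 5
done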

5.8 Test Harbor Setup

# Check if all Harbor containers are running
cd /opt/APP_NAME/harbor
docker compose ps

# Test Harbor API (HTTPS)
curl -k https://localhost/api/v2.0/health

# Test Harbor UI (HTTPS)
curl -k -I https://localhost

# Expected output: HTTP/1.1 200 OK

Important: All Harbor services should show as "Up" in the docker compose ps output. The health check should return a JSON response indicating all services are healthy.

5.9 Access Harbor Web UI

  1. Open your browser and navigate to: https://YOUR_CI_CD_IP
  2. Login with default credentials:
    • Username: admin
    • Password: Harbor12345 (or your configured password)
  3. Change the admin password:
    • Click on the user icon "admin" in the top right corner of the UI
    • Click "Change Password" from the dropdown menu
    • Enter your current password and a new secure password
    • Click "OK" to save the new password

5.10 Configure Harbor for Public Read, Authenticated Write

  1. Create Application Project:

    • Go to ProjectsNew Project
    • Set Project Name: APP_NAME (replace with your actual application name)
    • Set Access Level: Public
    • Click OK
  2. Create a User for CI/CD:

    • Go to AdministrationUsersNew User
    • Set Username: ci-user
    • Set Email: ci@example.com
    • Set Password: your-secure-password
    • Click OK
  3. Assign Project Role to ci-user:

    • Go to ProjectsAPP_NAMEMembers+ User
    • Select User: ci-user
    • Set Role: Developer
    • Click OK

Note: With a public project, anyone can pull images without authentication, but only authenticated users (like ci-user) can push images. This provides the perfect balance of ease of use for deployments and security for image management.
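
If you prefer to script this instead of clicking through the UI, Harbor also exposes these operations through its v2 REST API. A hedged sketch using the admin credentials (adjust the IP, names, and passwords to your values; the project-member assignment from item 3 can still be done in the UI as described above):

# Create the public APP_NAME project
curl -k -u admin:Harbor12345 -X POST "https://YOUR_CI_CD_IP/api/v2.0/projects" \
  -H "Content-Type: application/json" \
  -d '{"project_name": "APP_NAME", "metadata": {"public": "true"}}'

# Create the ci-user account
curl -k -u admin:Harbor12345 -X POST "https://YOUR_CI_CD_IP/api/v2.0/users" \
  -H "Content-Type: application/json" \
  -d '{"username": "ci-user", "email": "ci@example.com", "realname": "CI User", "password": "your-secure-password"}'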

5.11 Test Harbor Authentication and Access Model

# Test Docker login to Harbor
docker login YOUR_CI_CD_IP
# Enter: ci-user and your-secure-password

# Create a test image
echo "FROM alpine:latest" > /tmp/test.Dockerfile
echo "RUN echo 'Hello from Harbor test image'" >> /tmp/test.Dockerfile

# Build and tag test image for APP_NAME project
docker build -f /tmp/test.Dockerfile -t YOUR_CI_CD_IP/APP_NAME/test:latest /tmp

# Push to Harbor (requires authentication)
docker push YOUR_CI_CD_IP/APP_NAME/test:latest

# Test public pull (no authentication required)
docker logout YOUR_CI_CD_IP
docker pull YOUR_CI_CD_IP/APP_NAME/test:latest

# Verify the image was pulled successfully
docker images | grep APP_NAME/test

# Test that unauthorized push is blocked
echo "FROM alpine:latest" > /tmp/unauthorized.Dockerfile
echo "RUN echo 'This push should fail'" >> /tmp/unauthorized.Dockerfile
docker build -f /tmp/unauthorized.Dockerfile -t YOUR_CI_CD_IP/APP_NAME/unauthorized:latest /tmp
docker push YOUR_CI_CD_IP/APP_NAME/unauthorized:latest
# Expected: This should fail with authentication error

# Clean up test images
docker rmi YOUR_CI_CD_IP/APP_NAME/test:latest
docker rmi YOUR_CI_CD_IP/APP_NAME/unauthorized:latest

Expected behavior:

  • Push requires authentication: docker push only works when logged in
  • Pull works without authentication: docker pull works without login for public projects
  • Unauthorized push is blocked: docker push fails when not logged in
  • Web UI accessible: Harbor UI is available at https://YOUR_CI_CD_IP

5.12 Harbor Access Model Summary

Your Harbor registry is now configured with the following access model:

APP_NAME Project:

  • Pull (read): No authentication required
  • Push (write): Requires authentication
  • Web UI: Accessible to view images

Security Features:

  • Vulnerability scanning: Automatic CVE scanning with Trivy
  • Role-based access control: Different user roles (admin, developer, guest)
  • Audit logs: Complete trail of all operations

5.13 Troubleshooting Common Harbor Issues

Certificate Issues:

# If you get "tls: failed to verify certificate" errors:
# 1. Verify certificate has proper SANs
openssl x509 -in /etc/ssl/registry/registry.crt -text -noout | grep -A 5 "Subject Alternative Name"

# 2. Regenerate certificate if SANs are missing
sudo openssl req -x509 -newkey rsa:4096 -keyout /etc/ssl/registry/registry.key -out /etc/ssl/registry/registry.crt -days 365 -nodes -extensions v3_req -config /etc/ssl/registry/openssl.conf

# 3. Restart Harbor and Docker
cd /opt/APP_NAME/harbor && docker compose down && docker compose up -d
sudo systemctl restart docker

Connection Issues:

# If you get "connection refused" errors:
# 1. Check if Harbor is running
docker compose ps

# 2. Check Harbor logs
docker compose logs

# 3. Verify ports are open
netstat -tuln | grep -E ':(80|443)'

Docker Configuration Issues:

# If Docker still can't connect after certificate fixes:
# 1. Verify Docker daemon configuration
cat /etc/docker/daemon.json

# 2. Check if certificate is in system CA store
ls -la /usr/local/share/ca-certificates/registry.crt

# 3. Update CA certificates and restart Docker
sudo update-ca-certificates
sudo systemctl restart docker

Step 6: Set Up SSH for Production Communication

6.1 Generate SSH Key Pair

Important: Run this command as the DEPLOY_USER (not root or SERVICE_USER). The DEPLOY_USER is responsible for deployment orchestration and SSH communication with the production server.

ssh-keygen -t ed25519 -C "ci-cd-server" -f ~/.ssh/id_ed25519 -N ""

What this does:

  • Creates an SSH key pair for secure communication between CI/CD and production servers
  • The DEPLOY_USER uses this key to SSH to the production server for deployments
  • The key is stored in the DEPLOY_USER's home directory for security

Security Note: The DEPLOY_USER handles deployment orchestration, while the SERVICE_USER runs the actual CI pipeline. This separation provides better security through the principle of least privilege.

6.2 Create SSH Config

cat > ~/.ssh/config << 'EOF'
Host production
    HostName YOUR_PRODUCTION_IP
    User DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF

chmod 600 ~/.ssh/config

Step 7: Install Forgejo Actions Runner

7.1 Download Runner

Important: Run this step as the DEPLOY_USER (not root or SERVICE_USER). The DEPLOY_USER handles deployment tasks including downloading and installing the Forgejo runner.

cd ~

# Get the latest version dynamically
LATEST_VERSION=$(curl -s https://code.forgejo.org/api/v1/repos/forgejo/runner/releases | jq -r '.[0].tag_name')
echo "Downloading Forgejo runner version: $LATEST_VERSION"

# Download the latest runner
wget https://code.forgejo.org/forgejo/runner/releases/download/${LATEST_VERSION}/forgejo-runner-${LATEST_VERSION#v}-linux-amd64
chmod +x forgejo-runner-${LATEST_VERSION#v}-linux-amd64
sudo mv forgejo-runner-${LATEST_VERSION#v}-linux-amd64 /usr/bin/forgejo-runner

Alternative: Pin to Specific Version (Recommended for Production)

If you prefer to pin to a specific version for stability, replace the dynamic download with:

cd ~
VERSION="v6.3.1"  # Pin to specific version
wget https://code.forgejo.org/forgejo/runner/releases/download/${VERSION}/forgejo-runner-${VERSION#v}-linux-amd64
chmod +x forgejo-runner-${VERSION#v}-linux-amd64
sudo mv forgejo-runner-${VERSION#v}-linux-amd64 /usr/bin/forgejo-runner

What this does:

  • Dynamic approach: Downloads the latest stable Forgejo Actions runner
  • Version pinning: Allows you to specify a known-good version for production
  • System installation: Installs the binary system-wide in /usr/bin/ for proper Linux structure
  • Makes the binary executable and available system-wide

Production Recommendation: Use version pinning in production environments to ensure consistency and avoid unexpected breaking changes.

7.2 Register Runner

Important: The runner must be registered with your Forgejo instance before it can start. This creates the required .runner configuration file.

Step 1: Get Permissions to Create Repository-level Runners

To create a repository-level runner, you need Repository Admin or Owner permissions. Here's how to check and manage permissions:

Check Your Current Permissions:

  1. Go to your repository: https://your-forgejo-instance/your-username/your-repo
  2. Look for the Settings tab in the repository navigation
  3. If you see Actions in the left sidebar under Settings, you have the right permissions
  4. If you don't see Settings or Actions, you don't have admin access

Add Repository Admin (Repository Owner Only):

If you're the repository owner and need to give someone else admin access:

  1. Go to Repository Settings:

    • Navigate to your repository
    • Click Settings tab
    • Click Collaborators in the left sidebar
  2. Add Collaborator:

    • Click Add Collaborator button
    • Enter the username or email of the person you want to add
    • Select Admin from the role dropdown
    • Click Add Collaborator
  3. Alternative: Manage Team Access (for Organizations):

    • Go to Settings → Collaborators
    • Click Manage Team Access
    • Add the team with Admin permissions

Repository Roles and Permissions:

Role    Can Create Runners    Can Manage Repository    Can Push Code
Owner   Yes                   Yes                      Yes
Admin   Yes                   Yes                      Yes
Write   No                    No                       Yes
Read    No                    No                       No

If You Don't Have Permissions:

Option 1: Ask Repository Owner

  • Contact the person who owns the repository
  • Ask them to create the runner and share the registration token with you

Option 2: Use Organization/User Runner

  • If you have access to organization settings, create an org-level runner
  • Or create a user-level runner if you own other repositories

Option 3: Site Admin Help

  • Contact your Forgejo instance administrator to create a site-level runner

Site Administrator: Setting Repository Admin (Forgejo Instance Admin)

To add an existing user as an Administrator of an existing repository in Forgejo, follow these steps:

  1. Go to the repository: Navigate to the main page of the repository you want to manage.
  2. Access repository settings: Click on the "Settings" tab under your repository name.
  3. Go to Collaborators & teams: In the sidebar, under the "Access" section, click on "Collaborators & teams".
  4. Manage access: Under "Manage access", locate the existing user you want to make an administrator.
  5. Change their role: Next to the user's name, select the "Role" dropdown menu and click on "Administrator".

Important Note: If the user is already the Owner of the repository, then they do not have to add themselves as an Administrator of the repository and indeed cannot. Repository owners automatically have all administrative permissions.

Important Notes for Site Administrators:

  • Repository Admin can manage the repository but cannot modify site-wide settings
  • Site Admin retains full control over the Forgejo instance
  • Changes take effect immediately for the user
  • Consider the security implications of granting admin access

Step 2: Get Registration Token

  1. Go to your Forgejo repository
  2. Navigate to Settings → Actions → Runners
  3. Click "New runner"
  4. Copy the registration token

Step 3: Register the Runner

# Switch to DEPLOY_USER to register the runner
sudo su - DEPLOY_USER

cd ~

# Register the runner with your Forgejo instance
forgejo-runner register \
  --instance https://your-forgejo-instance \
  --token YOUR_REGISTRATION_TOKEN \
  --name "ci-cd-dind-runner" \
  --labels "ubuntu-latest,docker,dind" \
  --no-interactive

Important: Replace your-forgejo-instance with your actual Forgejo instance URL (make sure the URL ends with a trailing /) and YOUR_REGISTRATION_TOKEN with the token you copied from Step 2.

Note: The your-forgejo-instance should be the base URL of your Forgejo instance (e.g., https://git.<your-domain>/), not the full path to the repository. The runner registration process will handle connecting to the specific repository based on the token you provide.

What this does:

  • Creates the required .runner configuration file in the DEPLOY_USER's home directory
  • Registers the runner with your Forgejo instance
  • Sets up the runner with appropriate labels for Ubuntu and Docker environments

Step 4: Set Up System Configuration

# Create the system config directory for the Forgejo runner
sudo mkdir -p /etc/forgejo-runner

# Copy the runner configuration to the system location
sudo cp /home/DEPLOY_USER/.runner /etc/forgejo-runner/.runner

# Set proper ownership and permissions
sudo chown SERVICE_USER:SERVICE_USER /etc/forgejo-runner/.runner
sudo chmod 600 /etc/forgejo-runner/.runner

What this does:

  • Creates the system configuration directory (/etc/forgejo-runner)
  • Copies the .runner configuration from the DEPLOY_USER's home directory to the system location
  • Sets ownership and permissions so the SERVICE_USER (which runs the runner service) can read the configuration
Step 5: Create and Enable Systemd Service

sudo tee /etc/systemd/system/forgejo-runner.service > /dev/null << 'EOF'
[Unit]
Description=Forgejo Actions Runner
After=network.target

[Service]
Type=simple
User=SERVICE_USER
WorkingDirectory=/etc/forgejo-runner
ExecStart=/usr/bin/forgejo-runner daemon
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# Enable the service
sudo systemctl daemon-reload
sudo systemctl enable forgejo-runner.service

What this does:

  • Creates the systemd service configuration for the Forgejo runner
  • Sets the working directory to /etc/forgejo-runner where the .runner file is located
  • Enables the service to start automatically on boot
  • Sets up proper restart behavior for reliability

7.3 Start Service

# Start the Forgejo runner service
sudo systemctl start forgejo-runner.service

# Verify the service is running
sudo systemctl status forgejo-runner.service

Expected Output: The service should show "active (running)" status.

What this does:

  • Starts the Forgejo runner daemon as a system service
  • The runner will now be available to accept and execute workflows from your Forgejo instance
  • The service will automatically restart if it crashes or the system reboots

7.4 Test Runner Configuration

# Check if the runner is running
sudo systemctl status forgejo-runner.service

# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager

# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "ci-cd-dind-runner" with status "Online"

Expected Output:

  • systemctl status should show "active (running)"
  • Forgejo web interface should show the runner as online

If something goes wrong:

  • Check logs: sudo journalctl -u forgejo-runner.service -f
  • Verify token: Make sure the registration token is correct
  • Check network: Ensure the runner can reach your Forgejo instance
  • Restart service: sudo systemctl restart forgejo-runner.service

Step 8: Set Up Docker-in-Docker (DinD) for CI Operations

Important: This step sets up a Docker-in-Docker container that provides an isolated environment for CI/CD operations, eliminating resource contention with Harbor and simplifying cleanup.

8.1 Create DinD Container

# Create DinD container with persistent storage
docker run -d \
  --name ci-cd-dind \
  --privileged \
  --restart unless-stopped \
  -p 2376:2376 \
  -v ci-cd-data:/var/lib/docker \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:dind

# Wait for DinD to start
sleep 15

# Test DinD connectivity
docker exec ci-cd-dind docker version

What this does:

  • Creates isolated environment: DinD container runs its own Docker daemon
  • Persistent storage: ci-cd-data volume preserves data between restarts
  • Privileged mode: Required for Docker-in-Docker functionality
  • Auto-restart: Container restarts automatically if it crashes
  • Docker socket access: Allows DinD to communicate with host Docker

8.2 Configure DinD for Harbor Registry

# Configure the Docker daemon inside DinD to trust the Harbor registry
docker exec ci-cd-dind sh -c 'mkdir -p /etc/docker && echo "{\"insecure-registries\": [\"YOUR_CI_CD_IP\"]}" > /etc/docker/daemon.json'

# Restart the DinD container so its Docker daemon picks up the new configuration
docker restart ci-cd-dind

# Wait for the DinD daemon to come back up
sleep 15

# Log in to Harbor from inside DinD (push requires the ci-user credentials from Step 5.10)
docker exec -it ci-cd-dind docker login YOUR_CI_CD_IP
# Enter: ci-user and your-secure-password

# Test Harbor connectivity from DinD
docker exec ci-cd-dind docker pull alpine:latest
docker exec ci-cd-dind docker tag alpine:latest YOUR_CI_CD_IP/APP_NAME/dind-test:latest
docker exec ci-cd-dind docker push YOUR_CI_CD_IP/APP_NAME/dind-test:latest

# Clean up test image
docker exec ci-cd-dind docker rmi YOUR_CI_CD_IP/APP_NAME/dind-test:latest

Important: Replace YOUR_CI_CD_IP with your actual CI/CD Linode IP address (the same address Harbor is served on from Step 5).

What this does:

  • Configures insecure registry: Allows DinD to push to Harbor without SSL verification
  • Tests connectivity: Verifies DinD can pull, tag, and push images to Harbor
  • Validates setup: Ensures the complete CI/CD pipeline will work

8.3 Create DinD Cleanup Script

# Create simplified cleanup script for DinD
cat > /opt/APP_NAME/scripts/dind-cleanup.sh << 'EOF'
#!/bin/bash

# Docker-in-Docker Cleanup Script
# This script provides a simple way to clean up the DinD environment
# by restarting the DinD container, which gives a fresh environment.

set -e

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Logging functions
log_info() {
    echo -e "${BLUE}[INFO]${NC} $1"
}

log_success() {
    echo -e "${GREEN}[SUCCESS]${NC} $1"
}

log_warning() {
    echo -e "${YELLOW}[WARNING]${NC} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

show_help() {
    # Use a distinct delimiter so this nested heredoc doesn't terminate the outer 'EOF' heredoc
    cat << HELP_EOF
Docker-in-Docker Cleanup Script

Usage: $0 [OPTIONS]

Options:
    --dry-run          Show what would be done without executing
    --help|-h          Show this help message

Examples:
    $0                 # Clean up DinD environment
    $0 --dry-run       # Show what would be done
HELP_EOF
}

# Parse command line arguments
DRY_RUN="false"

while [[ $# -gt 0 ]]; do
    case $1 in
        --dry-run)
            DRY_RUN="true"
            shift
            ;;
        --help|-h)
            show_help
            exit 0
            ;;
        *)
            log_error "Unknown option: $1"
            show_help
            exit 1
            ;;
    esac
done

# Main cleanup function
cleanup_dind() {
    echo
    echo "=================================================================================="
    echo "                    🧹 Docker-in-Docker Cleanup 🧹"
    echo "=================================================================================="
    echo

    # Check if DinD container exists
    if ! docker ps -a --format "{{.Names}}" | grep -q "^ci-cd-dind$"; then
        log_error "DinD container 'ci-cd-dind' not found!"
        log_info "Creating new DinD container..."
        
        if [ "$DRY_RUN" = "true" ]; then
            log_info "DRY RUN: Would create DinD container"
            return
        fi
        
        docker run -d \
          --name ci-cd-dind \
          --privileged \
          --restart unless-stopped \
          -p 2376:2376 \
          -v ci-cd-data:/var/lib/docker \
          -v /var/run/docker.sock:/var/run/docker.sock \
          docker:dind
        
        log_success "DinD container created successfully"
        return
    fi

    # Check if DinD container is running
    if docker ps --format "{{.Names}}" | grep -q "^ci-cd-dind$"; then
        log_info "DinD container is running"
        
        if [ "$DRY_RUN" = "true" ]; then
            log_info "DRY RUN: Would stop and restart DinD container"
            log_info "DRY RUN: This would clear all CI/CD artifacts and give fresh environment"
            return
        fi
        
        log_info "Stopping DinD container..."
        docker stop ci-cd-dind
        
        log_info "Removing DinD container..."
        docker rm ci-cd-dind
        
        log_info "Creating fresh DinD container..."
        docker run -d \
          --name ci-cd-dind \
          --privileged \
          --restart unless-stopped \
          -p 2376:2376 \
          -v ci-cd-data:/var/lib/docker \
          -v /var/run/docker.sock:/var/run/docker.sock \
          docker:dind
        
        # Wait for DinD to start
        log_info "Waiting for DinD to start..."
        sleep 10
        
        # Test DinD connectivity
        if timeout 30 bash -c 'until docker exec ci-cd-dind docker version >/dev/null 2>&1; do sleep 1; done'; then
            log_success "DinD container is ready!"
        else
            log_error "DinD container failed to start properly"
            exit 1
        fi
        
    else
        log_info "DinD container exists but is not running"
        
        if [ "$DRY_RUN" = "true" ]; then
            log_info "DRY RUN: Would remove and recreate DinD container"
            return
        fi
        
        log_info "Removing existing DinD container..."
        docker rm ci-cd-dind
        
        log_info "Creating fresh DinD container..."
        docker run -d \
          --name ci-cd-dind \
          --privileged \
          --restart unless-stopped \
          -p 2376:2376 \
          -v ci-cd-data:/var/lib/docker \
          -v /var/run/docker.sock:/var/run/docker.sock \
          docker:dind
        
        # Wait for DinD to start
        log_info "Waiting for DinD to start..."
        sleep 10
        
        # Test DinD connectivity
        if timeout 30 bash -c 'until docker exec ci-cd-dind docker version >/dev/null 2>&1; do sleep 1; done'; then
            log_success "DinD container is ready!"
        else
            log_error "DinD container failed to start properly"
            exit 1
        fi
    fi

    echo
    echo "=================================================================================="
    log_success "DinD cleanup completed successfully!"
    echo "=================================================================================="
    echo
    log_info "Benefits of this cleanup:"
    log_info "  ✅ Fresh Docker environment for CI/CD"
    log_info "  ✅ No resource contention with Harbor"
    log_info "  ✅ Clean state for Rust testing"
    log_info "  ✅ Isolated CI/CD operations"
    echo
}

# Show current DinD status
show_status() {
    echo "=================================================================================="
    echo "                    📊 DinD Status 📊"
    echo "=================================================================================="
    echo
    
    if docker ps -a --format "{{.Names}}" | grep -q "^ci-cd-dind$"; then
        log_info "DinD Container Status:"
        docker ps -a --filter "name=ci-cd-dind" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
        echo
        
        if docker ps --format "{{.Names}}" | grep -q "^ci-cd-dind$"; then
            log_info "DinD Docker Info:"
            docker exec ci-cd-dind docker info --format "{{.ServerVersion}}" 2>/dev/null || log_warning "Cannot connect to DinD Docker daemon"
            echo
            
            log_info "DinD Images:"
            docker exec ci-cd-dind docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}" 2>/dev/null || log_warning "Cannot list DinD images"
            echo
            
            log_info "DinD Containers:"
            docker exec ci-cd-dind docker ps -a --format "table {{.Names}}\t{{.Status}}\t{{.Image}}" 2>/dev/null || log_warning "Cannot list DinD containers"
        else
            log_warning "DinD container is not running"
        fi
    else
        log_warning "DinD container does not exist"
    fi
    
    echo "=================================================================================="
}

# Main execution
if [ "$DRY_RUN" = "true" ]; then
    echo
    echo "=================================================================================="
    echo "                            🚨 DRY RUN MODE 🚨"
    echo "                    No changes will be made"
    echo "=================================================================================="
    echo
    show_status
    cleanup_dind
else
    show_status
    cleanup_dind
fi
EOF

# Make the script executable
chmod +x /opt/APP_NAME/scripts/dind-cleanup.sh

What this does:

  • Creates cleanup script: Simple script to restart DinD container for fresh environment
  • Status monitoring: Shows current DinD container and Docker state
  • Dry-run mode: Allows testing without making changes
  • Error handling: Proper error checking and user feedback

8.4 Test DinD Setup

# Test DinD cleanup script
cd /opt/APP_NAME
./scripts/dind-cleanup.sh --dry-run

# Test DinD functionality
docker exec ci-cd-dind docker run --rm alpine:latest echo "DinD is working!"

# Test Harbor integration (uses the docker login from Step 8.2)
docker exec ci-cd-dind docker pull alpine:latest
docker exec ci-cd-dind docker tag alpine:latest YOUR_CI_CD_IP/APP_NAME/dind-test:latest
docker exec ci-cd-dind docker push YOUR_CI_CD_IP/APP_NAME/dind-test:latest

# Clean up test
docker exec ci-cd-dind docker rmi YOUR_CI_CD_IP/APP_NAME/dind-test:latest

Expected Output:

  • DinD container should be running and accessible
  • Docker commands should work inside DinD
  • Harbor push/pull should work from DinD
  • Cleanup script should show proper status

8.5 Set Up Automated DinD Cleanup

# Create a cron job to run DinD cleanup daily at 2 AM
(crontab -l 2>/dev/null; echo "0 2 * * * cd /opt/APP_NAME && ./scripts/dind-cleanup.sh >> /tmp/dind-cleanup.log 2>&1") | crontab -

# Verify the cron job was added
crontab -l

What this does:

  • Automated cleanup: Restarts DinD container daily for fresh environment
  • Prevents resource buildup: Clears CI/CD artifacts automatically
  • Maintains performance: Ensures consistent CI/CD performance
  • Zero Harbor impact: DinD cleanup doesn't affect Harbor operations

Step 9: Set Up Monitoring and Cleanup

9.1 Monitoring Script

Important: The repository includes a pre-configured monitoring script in the scripts/ directory that can be used for both CI/CD and production monitoring.

Repository Script:

  • scripts/monitor.sh - Comprehensive monitoring script with support for both CI/CD and production environments

To use the repository monitoring script:

# The repository is already cloned at /opt/APP_NAME/
cd /opt/APP_NAME

# Make the script executable
chmod +x scripts/monitor.sh

# Test CI/CD monitoring
./scripts/monitor.sh --type ci-cd

# Test production monitoring (if you have a production setup)
./scripts/monitor.sh --type production

Note: The repository script is more comprehensive and includes proper error handling, colored output, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate monitoring information.

9.2 DinD Cleanup Script

Important: With the DinD setup, CI/CD operations are isolated in the DinD container. This means we can use a much simpler cleanup approach - just restart the DinD container for a fresh environment.

DinD Cleanup Script:

  • scripts/dind-cleanup.sh - Simple script to restart DinD container for fresh CI environment

To use the DinD cleanup script:

# The repository is already cloned at /opt/APP_NAME/
cd /opt/APP_NAME

# Make the script executable
chmod +x scripts/dind-cleanup.sh

# Test DinD cleanup (dry run first)
./scripts/dind-cleanup.sh --dry-run

# Run DinD cleanup
./scripts/dind-cleanup.sh

Benefits of DinD cleanup:

  • Simple operation: Just restart the DinD container
  • Zero Harbor impact: Harbor registry is completely unaffected
  • Fresh environment: Every cleanup gives a completely clean state
  • Fast execution: No complex resource scanning needed
  • Reliable: No risk of accidentally removing Harbor resources

9.3 Test DinD Cleanup Script

# Test DinD cleanup with dry run first
./scripts/dind-cleanup.sh --dry-run

# Run the DinD cleanup script
./scripts/dind-cleanup.sh

# Verify DinD is working after cleanup
docker exec ci-cd-dind docker version
docker exec ci-cd-dind docker run --rm alpine:latest echo "DinD cleanup successful!"

Expected Output:

  • DinD cleanup script should run without errors
  • DinD container should be restarted with fresh environment
  • Docker commands should work inside DinD after cleanup
  • Harbor registry should remain completely unaffected

If something goes wrong:

  • Check script permissions: ls -la scripts/dind-cleanup.sh
  • Verify DinD container: docker ps | grep ci-cd-dind
  • Check DinD logs: docker logs ci-cd-dind
  • Run manually: bash -x scripts/dind-cleanup.sh

9.4 Set Up Automated DinD Cleanup

Note: This is the same cron job configured in Step 8.5. If you already added it there, skip this step to avoid duplicate crontab entries.

# Create a cron job to run DinD cleanup daily at 2 AM
(crontab -l 2>/dev/null; echo "0 2 * * * cd /opt/APP_NAME && ./scripts/dind-cleanup.sh >> /tmp/dind-cleanup.log 2>&1") | crontab -

# Verify the cron job was added
crontab -l

What this does:

  • Runs automatically: The DinD cleanup script runs every day at 2:00 AM
  • Frequency: Daily cleanup to prevent CI/CD resource buildup
  • Logging: All cleanup output is logged to /tmp/dind-cleanup.log
  • What it cleans: Restarts DinD container for fresh CI environment
  • Zero Harbor impact: Harbor registry operations are completely unaffected

9.5 Test Cleanup Script

# Create some test images to clean up
docker pull alpine:latest
docker pull nginx:latest
docker tag alpine:latest test-cleanup:latest
docker tag nginx:latest test-cleanup2:latest

# Test cleanup with dry run first
./scripts/cleanup.sh --type ci-cd --dry-run

# Run the cleanup script
./scripts/cleanup.sh --type ci-cd

# Verify cleanup worked
echo "Checking remaining images:"
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"

echo "Checking remaining volumes:"
docker volume ls

echo "Checking remaining networks:"
docker network ls

Expected Output:

  • Cleanup script should run without errors
  • Test images should be removed
  • System should report cleanup completion
  • Remaining images should be minimal (only actively used ones)

If something goes wrong:

  • Check script permissions: ls -la scripts/cleanup.sh
  • Verify Docker access: docker ps
  • Check Harbor access: cd /opt/APP_NAME/harbor && docker compose ps
  • Run manually: bash -x scripts/cleanup.sh

9.6 Set Up Automated Cleanup

# Create a cron job to run cleanup daily at 3 AM using the repository script
(crontab -l 2>/dev/null; echo "0 3 * * * cd /opt/APP_NAME && ./scripts/cleanup.sh --type ci-cd >> /tmp/cleanup.log 2>&1") | crontab -

# Verify the cron job was added
crontab -l

What this does:

  • Runs automatically: The cleanup script runs every day at 3:00 AM
  • Frequency: Daily cleanup to prevent disk space issues
  • Logging: All cleanup output is logged to /tmp/cleanup.log
  • What it cleans: Unused Docker images, volumes, networks, and Harbor images

Step 10: Configure Firewall

sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 443/tcp  # Harbor registry (public read access)

Security Model:

  • Port 443 (Harbor): Public read access for public projects, authenticated write access
  • SSH: Allowed from anywhere by default; optionally restrict it to your own IP (see below)
  • All other ports: Blocked
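
If you do want SSH limited to your own address, replace the blanket rule with a source-restricted one (203.0.113.10 is a placeholder for your IP):

sudo ufw delete allow ssh
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp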

Step 11: Test CI/CD Setup

11.1 Test Docker Installation

docker --version
docker compose --version

11.2 Check Harbor Status

cd /opt/APP_NAME/harbor
docker compose ps

11.3 Test Harbor Access

# Test Harbor API
curl -k https://localhost/api/v2.0/health

# Test Harbor UI
curl -k -I https://localhost

11.4 Get Public Key for Production Server

cat ~/.ssh/id_ed25519.pub

Important: Copy this public key - you'll need it for the production server setup.


Part 2: Production Linode Setup

Step 12: Initial System Setup

12.1 Update the System

sudo apt update && sudo apt upgrade -y

12.2 Configure Timezone

# Configure timezone interactively
sudo dpkg-reconfigure tzdata

# Verify timezone setting
date

What this does: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).

Expected output: After selecting your timezone, the date command should show the current date and time in your selected timezone.

12.3 Configure /etc/hosts

# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts

# Verify the configuration
cat /etc/hosts

What this does:

  • Adds localhost entries for both IPv4 and IPv6 addresses to /etc/hosts
  • Ensures proper localhost resolution for both IPv4 and IPv6

Important: Replace YOUR_PRODUCTION_IPV4_ADDRESS and YOUR_PRODUCTION_IPV6_ADDRESS with the actual IPv4 and IPv6 addresses of your Production Linode obtained from your Linode dashboard.

Expected output: The /etc/hosts file should show entries for 127.0.0.1, ::1, and your Linode's actual IP addresses all mapping to localhost.

12.4 Install Essential Packages

sudo apt install -y \
    curl \
    wget \
    git \
    ca-certificates \
    apt-transport-https \
    software-properties-common \
    ufw \
    fail2ban \
    htop \
    nginx \
    certbot \
    python3-certbot-nginx

Step 13: Create Users

13.1 Create the SERVICE_USER User

# Create dedicated group for the service account
sudo groupadd -r SERVICE_USER

# Create service account user with dedicated group
sudo useradd -r -g SERVICE_USER -s /bin/bash -m -d /home/SERVICE_USER SERVICE_USER
echo "SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd

13.2 Create the DEPLOY_USER User

# Create deployment user
sudo useradd -m -s /bin/bash DEPLOY_USER
sudo usermod -aG sudo DEPLOY_USER
echo "DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd

13.3 Verify Users

sudo su - SERVICE_USER
whoami
pwd
exit

sudo su - DEPLOY_USER
whoami
pwd
exit

Step 14: Install Docker

14.1 Add Docker Repository

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update

14.2 Install Docker Packages

sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

14.3 Configure Docker for Service Account

sudo usermod -aG docker SERVICE_USER

Step 15: Install Docker Compose (Standalone Binary)

Note: The docker compose plugin was already installed in Step 14.2. This step additionally installs the standalone docker-compose binary for compatibility with any scripts that invoke it directly.

sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Step 16: Configure Security

16.1 Configure Firewall

sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 3000/tcp
sudo ufw allow 3001/tcp
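
To confirm the rules are in place (ports 3000 and 3001 are opened here for the frontend and backend used later in this guide):

# List the active firewall rules and defaults
sudo ufw status verbose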

16.2 Configure Fail2ban

sudo systemctl enable fail2ban
sudo systemctl start fail2ban
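
On Ubuntu, fail2ban normally ships with the sshd jail enabled by default; a quick status check (jail name assumed to be sshd):

# Confirm fail2ban is running and the SSH jail is active
sudo fail2ban-client status
sudo fail2ban-client status sshd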

Step 17: Create Application Directory

17.1 Create Directory Structure

sudo mkdir -p /opt/APP_NAME
sudo chown SERVICE_USER:SERVICE_USER /opt/APP_NAME

Note: Replace APP_NAME with your actual application name. This directory name can be controlled via the APP_NAME secret in your Forgejo repository settings. If you set the APP_NAME secret to myapp, the deployment directory will be /opt/myapp.

17.2 Create SSL Directory (Optional - for domain users)

sudo mkdir -p /opt/APP_NAME/nginx/ssl
sudo chown SERVICE_USER:SERVICE_USER /opt/APP_NAME/nginx/ssl

Step 18: Clone Repository and Set Up Application Files

18.1 Switch to SERVICE_USER User

sudo su - SERVICE_USER

18.2 Clone Repository

cd /opt/APP_NAME
git clone https://your-forgejo-instance/your-username/APP_NAME.git .

Important: The repository includes a pre-configured nginx/nginx.conf file that handles both SSL and non-SSL scenarios, with proper security headers, rate limiting, and CORS configuration. This file will be automatically used by the Docker Compose setup.

Important: The repository also includes a pre-configured .forgejo/workflows/ci.yml file that handles the complete CI/CD pipeline including testing, building, and deployment. This workflow is already set up to work with the private registry and production deployment.

Note: Replace your-forgejo-instance and your-username/APP_NAME with your actual Forgejo instance URL and repository path.

18.3 Create Environment File

The repository doesn't include a .env.example file for security reasons. The CI/CD pipeline will create the .env file dynamically during deployment. However, for manual testing or initial setup, you can create a basic .env file:

cat > /opt/APP_NAME/.env << 'EOF'
# Production Environment Variables
POSTGRES_PASSWORD=your_secure_password_here
REGISTRY=YOUR_CI_CD_IP:8080
IMAGE_NAME=APP_NAME
IMAGE_TAG=latest

# Database Configuration
POSTGRES_DB=sharenet
POSTGRES_USER=sharenet
DATABASE_URL=postgresql://sharenet:your_secure_password_here@postgres:5432/sharenet

# Application Configuration
NODE_ENV=production
RUST_LOG=info
RUST_BACKTRACE=1
EOF

Important: Replace YOUR_CI_CD_IP with your actual CI/CD Linode IP address and your_secure_password_here with a strong password.
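
To generate a strong password you can reuse the same approach as the user setup, and since the file contains credentials it is worth restricting its permissions:

# Generate a random password to paste into the .env file
openssl rand -base64 32

# Restrict the environment file to its owner only
chmod 600 /opt/APP_NAME/.env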

18.4 Configure Docker for Harbor Access

Note: The commands below require sudo, and SERVICE_USER is not in the sudo group. Run this step from a sudo-capable session (for example, DEPLOY_USER in a separate terminal), then continue as SERVICE_USER for the following steps.

# Add the CI/CD Harbor registry to Docker's insecure registries
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json << EOF
{
  "insecure-registries": ["YOUR_CI_CD_IP:8080"]
}
EOF

# Restart Docker to apply changes
sudo systemctl restart docker

Important: Replace YOUR_CI_CD_IP with your actual CI/CD Linode IP address.
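
Back as SERVICE_USER (or any user in the docker group), verify that Docker picked up the setting after the restart:

# The registry should appear under "Insecure Registries" in the daemon info
docker info | grep -A 3 "Insecure Registries"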

Step 19: Set Up SSH Key Authentication

19.1 Add CI/CD Public Key

# Create .ssh directory for SERVICE_USER
mkdir -p ~/.ssh
chmod 700 ~/.ssh

# Add the CI/CD public key (copy from CI/CD Linode)
echo "YOUR_CI_CD_PUBLIC_KEY" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Important: Replace YOUR_CI_CD_PUBLIC_KEY with the public key from the CI/CD Linode (the output from cat ~/.ssh/id_ed25519.pub on the CI/CD Linode).

19.2 Test SSH Connection

From the CI/CD Linode, test the SSH connection:

ssh production

Expected output: You should be able to SSH to the production server without a password prompt.
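
The production host alias used above is expected to have been created during the CI/CD Linode setup. If it is missing, an entry along these lines in the CI/CD user's ~/.ssh/config recreates it (the user name and IP are placeholders following this guide's conventions):

# Run on the CI/CD Linode as the user that owns the id_ed25519 key
cat >> ~/.ssh/config << 'EOF'
Host production
    HostName YOUR_PRODUCTION_IPV4_ADDRESS
    User SERVICE_USER
    IdentityFile ~/.ssh/id_ed25519
EOF
chmod 600 ~/.ssh/config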

Step 20: Test Production Setup

20.1 Test Docker Installation

docker --version
docker compose --version

20.2 Test Harbor Access

# Test pulling an image from the CI/CD Harbor registry
docker pull YOUR_CI_CD_IP:8080/public/backend:latest

Important: Replace YOUR_CI_CD_IP with your actual CI/CD Linode IP address. If nothing has been pushed to the public/backend repository yet, this pull will fail with a "not found" error; that is expected until the pipeline has pushed its first images.

20.3 Test Application Deployment

cd /opt/APP_NAME
docker compose up -d

20.4 Verify Application Status

docker compose ps
curl http://localhost:3000
curl http://localhost:3001/health

Expected Output:

  • All containers should be running
  • Frontend should be accessible on port 3000
  • Backend health check should return 200 OK

Part 3: Final Configuration and Testing

Step 21: Configure Forgejo Repository Secrets

21.1 Required Repository Secrets

Go to your Forgejo repository and add these secrets in Settings → Secrets and Variables → Actions:

Required Secrets:

  • CI_CD_IP: Your CI/CD Linode IP address
  • PRODUCTION_IP: Your Production Linode IP address
  • DEPLOY_USER: The deployment user name (e.g., deploy, ci, admin)
  • SERVICE_USER: The service user name (e.g., appuser, service, app)
  • APP_NAME: Your application name (e.g., sharenet, myapp)
  • POSTGRES_PASSWORD: A strong password for the PostgreSQL database

Optional Secrets (for domain users):

  • DOMAIN: Your domain name (e.g., example.com)
  • EMAIL: Your email for SSL certificate notifications

21.2 Configure Forgejo Actions Runner

21.2.1 Get Runner Token

  1. Go to your Forgejo repository
  2. Navigate to Settings → Actions → Runners
  3. Click "New runner"
  4. Copy the registration token

21.2.2 Configure Runner

# Switch to DEPLOY_USER on CI/CD Linode
sudo su - DEPLOY_USER

# Get the registration token from your Forgejo repository
# Go to Settings → Actions → Runners → New runner
# Copy the registration token

# Configure the runner
forgejo-runner register \
  --instance https://your-forgejo-instance \
  --token YOUR_TOKEN \
  --name "ci-cd-dind-runner" \
  --labels "ubuntu-latest,docker,dind" \
  --no-interactive

21.2.3 Start Runner

sudo systemctl start forgejo-runner.service
sudo systemctl status forgejo-runner.service

21.2.4 Test Runner Configuration

# Check if the runner is running
sudo systemctl status forgejo-runner.service

# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager

# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "ci-cd-dind-runner" with status "Online"

Expected Output:

  • systemctl status should show "active (running)"
  • Forgejo web interface should show the runner as online

If something goes wrong:

  • Check logs: sudo journalctl -u forgejo-runner.service -f
  • Verify token: Make sure the registration token is correct
  • Check network: Ensure the runner can reach your Forgejo instance
  • Restart service: sudo systemctl restart forgejo-runner.service

Step 22: Set Up Monitoring and Cleanup

22.1 Monitoring Script

Important: The repository includes a pre-configured monitoring script in the scripts/ directory that can be used for both CI/CD and production monitoring.

Repository Script:

  • scripts/monitor.sh - Comprehensive monitoring script with support for both CI/CD and production environments

To use the repository monitoring script:

# The repository is already cloned at /opt/APP_NAME/
cd /opt/APP_NAME

# Make the script executable
chmod +x scripts/monitor.sh

# Test CI/CD monitoring
./scripts/monitor.sh --type ci-cd

# Test production monitoring (if you have a production setup)
./scripts/monitor.sh --type production

Note: The repository script is more comprehensive and includes proper error handling, colored output, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate monitoring information.

22.2 DinD Cleanup Script

Important: With the DinD setup, CI/CD operations are isolated in the DinD container. This means we can use a much simpler cleanup approach - just restart the DinD container for a fresh environment.

DinD Cleanup Script:

  • scripts/dind-cleanup.sh - Simple script to restart DinD container for fresh CI environment

To use the DinD cleanup script:

# The repository is already cloned at /opt/APP_NAME/
cd /opt/APP_NAME

# Make the script executable
chmod +x scripts/dind-cleanup.sh

# Test DinD cleanup (dry run first)
./scripts/dind-cleanup.sh --dry-run

# Run DinD cleanup
./scripts/dind-cleanup.sh

Benefits of DinD cleanup:

  • Simple operation: Just restart the DinD container
  • Zero Harbor impact: Harbor registry is completely unaffected
  • Fresh environment: Every cleanup gives a completely clean state
  • Fast execution: No complex resource scanning needed
  • Reliable: No risk of accidentally removing Harbor resources
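
For reference, the core of the cleanup is conceptually just a container restart. A minimal sketch of the idea (not the repository script, which adds argument parsing, error handling, and dry-run support) might look like:

#!/bin/bash
# Minimal sketch: restart the DinD container to get a fresh CI environment
set -euo pipefail

echo "Restarting ci-cd-dind..."
docker restart ci-cd-dind

# Wait until the Docker daemon inside DinD responds again
until docker exec ci-cd-dind docker version > /dev/null 2>&1; do
    sleep 2
done
echo "DinD is back up with a clean environment."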

22.3 Test DinD Cleanup Script

# Test DinD cleanup with dry run first
./scripts/dind-cleanup.sh --dry-run

# Run the DinD cleanup script
./scripts/dind-cleanup.sh

# Verify DinD is working after cleanup
docker exec ci-cd-dind docker version
docker exec ci-cd-dind docker run --rm alpine:latest echo "DinD cleanup successful!"

Expected Output:

  • DinD cleanup script should run without errors
  • DinD container should be restarted with fresh environment
  • Docker commands should work inside DinD after cleanup
  • Harbor registry should remain completely unaffected

If something goes wrong:

  • Check script permissions: ls -la scripts/dind-cleanup.sh
  • Verify DinD container: docker ps | grep ci-cd-dind
  • Check DinD logs: docker logs ci-cd-dind
  • Run manually: bash -x scripts/dind-cleanup.sh

22.4 Set Up Automated DinD Cleanup

# Create a cron job to run DinD cleanup daily at 2 AM
(crontab -l 2>/dev/null; echo "0 2 * * * cd /opt/APP_NAME && ./scripts/dind-cleanup.sh >> /tmp/dind-cleanup.log 2>&1") | crontab -

# Verify the cron job was added
crontab -l

What this does:

  • Runs automatically: The DinD cleanup script runs every day at 2:00 AM
  • Frequency: Daily cleanup to prevent CI/CD resource buildup
  • Logging: All cleanup output is logged to /tmp/dind-cleanup.log
  • What it cleans: Restarts DinD container for fresh CI environment
  • Zero Harbor impact: Harbor registry operations are completely unaffected

Step 23: Test Complete Pipeline

23.1 Trigger a Test Build

  1. Make a small change to your repository (e.g., update a comment or add a test file)
  2. Commit and push the changes to trigger the CI/CD pipeline
  3. Monitor the build in your Forgejo repository → Actions tab

23.2 Verify Pipeline Steps

The pipeline should execute these steps in order:

  1. Checkout: Clone the repository
  2. Setup DinD: Configure Docker-in-Docker environment
  3. Test Backend: Run backend tests in isolated environment
  4. Test Frontend: Run frontend tests in isolated environment
  5. Build Backend: Build backend Docker image in DinD
  6. Build Frontend: Build frontend Docker image in DinD
  7. Push to Registry: Push images to Harbor registry from DinD
  8. Deploy to Production: Deploy to production server
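
While a build is running, you can watch the isolated CI activity from the CI/CD Linode without touching Harbor:

# Containers and images created by the workflow live only inside DinD
docker exec ci-cd-dind docker ps
docker exec ci-cd-dind docker images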

23.3 Check Harbor

# On CI/CD Linode
cd /opt/APP_NAME/registry

# Check if new images were pushed
curl -k https://localhost:8080/v2/_catalog

# Check specific repository tags
curl -k https://localhost:8080/v2/public/backend/tags/list
curl -k https://localhost:8080/v2/public/frontend/tags/list

23.4 Verify Production Deployment

# On Production Linode
cd /opt/APP_NAME

# Check if containers are running with new images
docker compose ps

# Check application health
curl http://localhost:3000
curl http://localhost:3001/health

# Check container logs for any errors
docker compose logs backend
docker compose logs frontend

23.5 Test Application Functionality

  1. Frontend: Visit your production URL (IP or domain)
  2. Backend API: Test API endpoints
  3. Database: Verify database connections
  4. Logs: Check for any errors in application logs
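
A few concrete checks for these items, run on the Production Linode (the postgres service name is taken from the DATABASE_URL used earlier; adjust names to match your compose file):

cd /opt/APP_NAME

# Backend health endpoint (application-specific endpoints can be tested the same way)
curl -i http://localhost:3001/health

# Database connectivity from inside the postgres container
docker compose exec postgres pg_isready -U sharenet

# Recent application logs
docker compose logs --tail=50 backend frontend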

Step 24: Set Up SSL/TLS (Optional - Domain Users)

24.1 Install SSL Certificate

If you have a domain pointing to your Production Linode:

# On Production Linode
sudo certbot --nginx -d your-domain.com

# Verify certificate
sudo certbot certificates

24.2 Configure Auto-Renewal

# Test auto-renewal
sudo certbot renew --dry-run

# Add to crontab for automatic renewal
sudo crontab -e
# Add this line:
# 0 12 * * * /usr/bin/certbot renew --quiet
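
Note that on Ubuntu 24.04 the certbot package usually installs a systemd timer that already handles renewal, so check for it before adding a duplicate cron entry:

# If certbot.timer is listed and active, the cron entry above is optional
systemctl list-timers | grep -i certbot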

Step 25: Final Verification

25.1 Security Check

# Check firewall status
sudo ufw status

# Check fail2ban status
sudo systemctl status fail2ban

# Check SSH access (should be key-based only)
sudo grep "PasswordAuthentication" /etc/ssh/sshd_config

25.2 Performance Check

# Check system resources
htop

# Check disk usage
df -h

# Check Docker disk usage
docker system df

25.3 Backup Verification

# Test backup script
cd /opt/APP_NAME
./scripts/backup.sh --dry-run

# Run actual backup
./scripts/backup.sh
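
If you want backups to run automatically (the daily maintenance checklist below assumes they do), a cron entry in the same style as the cleanup and monitoring jobs works; the 3 AM schedule is just an example:

# Run the backup every day at 3 AM and log the output
(crontab -l 2>/dev/null; echo "0 3 * * * cd /opt/APP_NAME && ./scripts/backup.sh >> /tmp/backup.log 2>&1") | crontab -

# Verify the cron job was added
crontab -l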

Step 26: Documentation and Maintenance

26.1 Update Documentation

  1. Update README.md with deployment information
  2. Document environment variables and their purposes
  3. Create troubleshooting guide for common issues
  4. Document backup and restore procedures

26.2 Set Up Monitoring Alerts

# Set up monitoring cron job
(crontab -l 2>/dev/null; echo "*/5 * * * * cd /opt/APP_NAME && ./scripts/monitor.sh --type production >> /tmp/monitor.log 2>&1") | crontab -

# Check monitoring logs
tail -f /tmp/monitor.log

26.3 Regular Maintenance Tasks

Daily:

  • Check application logs for errors
  • Monitor system resources
  • Verify backup completion

Weekly:

  • Review security logs
  • Update system packages
  • Test backup restoration

Monthly:

  • Review and rotate logs
  • Update SSL certificates
  • Review and update documentation
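
Several of these tasks map directly to commands already used in this guide; a few illustrative one-liners (adapt to your environment):

# Weekly: update system packages
sudo apt update && sudo apt upgrade -y

# Weekly: review recent SSH and fail2ban activity
sudo journalctl -u ssh --since "7 days ago" | tail -n 50
sudo fail2ban-client status sshd

# Monthly: trim old journal logs and renew certificates (domain users)
sudo journalctl --vacuum-time=30d
sudo certbot renew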

🎉 Congratulations!

You have successfully set up a complete CI/CD pipeline with:

  • Automated testing on every code push in isolated DinD environment
  • Docker image building and Harbor registry storage
  • Automated deployment to production
  • Health monitoring and logging
  • Backup and cleanup automation
  • Security hardening with proper user separation
  • SSL/TLS support for production (optional)
  • Zero resource contention between CI/CD and Harbor

Your application is now ready for continuous deployment with proper security, monitoring, and maintenance procedures in place!