CI/CD Pipeline Setup Guide

This guide covers setting up a complete Continuous Integration/Continuous Deployment (CI/CD) pipeline with a CI/CD Linode and a Production Linode for automated builds, testing, and deployments, using ephemeral Podman-in-Podman (PiP) containers for isolated CI operations.

Architecture Overview

┌───────────────────┐    ┌───────────────────┐    ┌───────────────────┐
│   Forgejo Host    │    │   CI/CD Linode    │    │ Production Linode │
│   (Repository)    │    │  (Actions Runner) │    │  (Podman Deploy)  │
│                   │    │ + Forgejo Registry│    │                   │
│                   │    │ + PiP Container   │    │                   │
└───────────────────┘    └───────────────────┘    └───────────────────┘
          │                        │                        │
          │                        │                        │
          └─────────── Push ───────┼────────────────────────┘
                                   │
                                   └──── Deploy ────────────┘

Pipeline Flow

  1. Code Push: Developer pushes code to Forgejo repository
  2. Automated Testing: CI/CD Linode runs tests in an isolated, ephemeral PiP environment
  3. Image Building: If tests pass, container images are built within PiP
  4. Registry Push: Images are pushed to the Forgejo Container Registry from PiP
  5. Production Deployment: Production Linode pulls images and deploys
  6. Health Check: Application is verified and accessible
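
For example, the final health-check step (6) can be as simple as probing the deployed application over HTTPS; this is a minimal sketch, and the domain and endpoint path are placeholders for whatever your application exposes:

# Hypothetical post-deploy health check; adjust the URL and endpoint to your application
curl -fsS --retry 5 --retry-delay 3 https://YOUR_DOMAIN/health && echo "Deployment healthy"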

Key Benefits of PiP (Podman-in-Podman) Approach

For Rust Testing:

  • Fresh environment every test run
  • Parallel execution capability
  • Isolated dependencies - no test pollution
  • Fast cleanup - just restart PiP container

For CI/CD Operations:

  • Zero resource contention with Forgejo Container Registry
  • Simple cleanup - one-line container restart
  • Perfect isolation - CI/CD can't affect Forgejo Container Registry
  • Consistent environment - same setup every time

For Maintenance:

  • Reduced complexity - no complex cleanup scripts
  • Easy debugging - isolated environment
  • Reliable operations - no interference between services

Prerequisites

  • Two Ubuntu 24.04 LTS Linodes with root access
  • Basic familiarity with Linux commands and SSH
  • Forgejo repository with Actions enabled

Quick Start

  1. Set up CI/CD Linode (Steps 0-8)
  2. Set up Production Linode (Steps 9-16)
  3. Set up Forgejo repository secrets (Step 17)
  4. Test the complete pipeline (Step 18)

What's Included

CI/CD Linode Features

  • Forgejo Actions runner for automated builds
  • Ephemeral Podman-in-Podman (PiP) containers for isolated CI operations
  • Secure setup scripts (secure_pip_setup.sh, pip_ready.sh) for automated PiP management
  • Forgejo Container Registry for secure image storage
  • FHS-compliant directory structure for data, certificates, and logs
  • Secure registry access via Forgejo authentication
  • Automatic HTTPS with nginx reverse proxy
  • Secure SSH communication with production
  • Ephemeral cleanup - fresh PiP container per CI run
  • Systemd user manager for robust rootless Podman services

Production Linode Features

  • Rootless Podman deployment with maximum security hardening under PROD_SERVICE_USER only
  • Host nginx reverse proxy with SSL termination (ports 80/443) - containers serve internal ports (8080/8443) only
  • Zero host port exposure for backend/frontend/postgres - UNIX sockets only for internal communication
  • Systemd user services for automatic restart and persistence via systemd user manager
  • No Podman TCP sockets - UNIX socket communication only (no ports 2375/2376 exposed)
  • Container hardening: readOnlyRootFilesystem, no privilege escalation, capabilities.drop=ALL
  • Secrets management: Kubernetes Secrets or mounted env-files for secure credential handling
  • IPv4-only networking with UFW firewall allowing only ports 22/80/443
  • Artifact-based deployment from OCI archives - no registry access on production
  • Local image references only (localhost/backend:deployed) - prevents external registry dependencies
  • Automatic health monitoring with liveness and readiness probes for all services
  • Resource limits and constraints to prevent resource exhaustion attacks
  • Fail2ban protection for SSH and application-level intrusion prevention

Pipeline Features

  • Ephemeral testing - fresh PiP container per CI run with maximum security
  • Comprehensive integration testing with real PostgreSQL database
  • Automated image building and push to Forgejo Container Registry from PiP
  • Automated deployment to production
  • Rollback capability with image versioning
  • Health monitoring and logging with readiness probes
  • Zero resource contention between CI/CD and Forgejo Container Registry
  • Robust rootless services via systemd user manager
  • Maximum security - no port exposure, UNIX sockets only, least privilege

Security Model and User Separation

This setup follows the principle of least privilege, using separate users for different purposes:

User Roles

  1. Root User

    • Purpose: Initial system setup only
    • SSH Access: Disabled after setup
    • Privileges: Full system access (used only during initial configuration)
  2. Deployment User (CI_DEPLOY_USER on CI Linode, PROD_DEPLOY_USER on Production Linode)

    • Purpose: SSH access, deployment tasks, system administration
    • SSH Access: Enabled with key-based authentication
    • Privileges: Sudo access for deployment and administrative tasks
    • Example: ci-deploy / prod-deploy
  3. Service Account (CI_SERVICE_USER on CI Linode, PROD_SERVICE_USER on Production Linode)

    • Purpose: Running application services (containers, databases)
    • SSH Access: None (no login shell)
    • Privileges: No sudo access, minimal system access
    • Example: ci-service / prod-service

Security Benefits

  • No root SSH access: Eliminates the most common attack vector
  • Principle of least privilege: Each user has only the access they need
  • Separation of concerns: Deployment tasks vs. service execution are separate
  • Audit trail: Clear distinction between deployment and service activities
  • Reduced attack surface: Service account has minimal privileges

File Permissions

  • Application files: Owned by CI_SERVICE_USER (CI Linode) / PROD_SERVICE_USER (Production Linode) for security
  • Container operations: Run by CI_SERVICE_USER (CI Linode) / PROD_SERVICE_USER (Production Linode) via rootless Podman
  • Service execution: Run by CI_SERVICE_USER (no sudo needed) / PROD_SERVICE_USER (no sudo needed)

Prerequisites and Initial Setup

What's Already Done (Assumptions)

This guide assumes you have already:

  1. Created two Ubuntu 24.04 LTS Linodes with root access
  2. Set root passwords for both Linodes
  3. Have SSH client installed on your local machine
  4. Have Forgejo repository with Actions enabled

Step 0: Initial SSH Access and Verification

Before proceeding with the setup, you need to establish initial SSH access to both Linodes.

0.1 Get Your Linode IP Addresses

From your Linode dashboard, note the IP addresses for:

  • CI/CD Linode: YOUR_CI_CD_IP (IP address only, no domain needed)
  • Production Linode: YOUR_PRODUCTION_IP (IP address for SSH and web access)

0.2 Test Initial SSH Access

Test SSH access to both Linodes:

# Test CI/CD Linode (IP address only)
ssh root@YOUR_CI_CD_IP

# Test Production Linode (IP address only)
ssh root@YOUR_PRODUCTION_IP

Expected output: SSH login prompt asking for root password.

If something goes wrong:

  • Verify the IP addresses are correct
  • Check that SSH is enabled on the Linodes
  • Ensure your local machine can reach the Linodes (no firewall blocking)

0.3 Choose Your Names

Before proceeding, decide on:

  1. CI Service Account Name: Choose a username for the CI service account (e.g., ci-service)

    • Replace CI_SERVICE_USER in this guide with your chosen name
    • This account runs the CI pipeline and container (Podman) operations on the CI Linode
  2. CI Deployment User Name: Choose a username for CI deployment tasks (e.g., ci-deploy)

    • Replace CI_DEPLOY_USER in this guide with your chosen name
    • This account has sudo privileges for deployment tasks
  3. Application Name: Choose a name for your application (e.g., sharenet)

    • Replace APP_NAME in this guide with your chosen name

Example:

  • If you choose ci-service as CI service account, ci-deploy as CI deployment user, and sharenet as application name:
    • Replace all CI_SERVICE_USER with ci-service
    • Replace all CI_DEPLOY_USER with ci-deploy
    • Replace all APP_NAME with sharenet
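
If you keep a local copy of this guide (or of scripts derived from it), you can substitute the placeholders in one pass with sed; the file name below is hypothetical:

# Replace the placeholder names with your chosen values (example file name)
sed -i 's/CI_SERVICE_USER/ci-service/g; s/CI_DEPLOY_USER/ci-deploy/g; s/APP_NAME/sharenet/g' setup-notes.md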

Security Model:

  • CI Service Account (CI_SERVICE_USER): Runs the CI pipeline and container (Podman) operations, no sudo access
  • CI Deployment User (CI_DEPLOY_USER): Handles SSH communication and orchestration, has sudo access
  • Root: Only used for initial setup, then disabled for SSH access

0.4 Set Up SSH Key Authentication for Local Development

Important: This step should be done on both Linodes to enable secure SSH access from your local development machine.

0.4.1 Generate SSH Key on Your Local Machine

On your local development machine, generate an SSH key pair:

# Generate SSH key pair (if you don't already have one)
ssh-keygen -t ed25519 -C "your-email@example.com" -f ~/.ssh/id_ed25519 -N ""

# Or use existing key if you have one
ls ~/.ssh/id_ed25519.pub
0.4.2 Add Your Public Key to Both Linodes

Copy your public key to both Linodes:

# Copy your public key to CI/CD Linode
ssh-copy-id root@YOUR_CI_CD_IP

# Copy your public key to Production Linode
ssh-copy-id root@YOUR_PRODUCTION_IP

Alternative method (if ssh-copy-id doesn't work):

# Copy your public key content
cat ~/.ssh/id_ed25519.pub

# Then manually add to each server
ssh root@YOUR_CI_CD_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys

ssh root@YOUR_PRODUCTION_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys
0.4.3 Test SSH Key Authentication

Test that you can access both servers without passwords:

# Test CI/CD Linode
ssh root@YOUR_CI_CD_IP 'echo "SSH key authentication works for CI/CD"'

# Test Production Linode
ssh root@YOUR_PRODUCTION_IP 'echo "SSH key authentication works for Production"'

Expected output: The echo messages should appear without password prompts.

0.4.4 Create Deployment Users

On both Linodes, create the deployment user with sudo privileges:

For CI Linode:

# Create CI deployment user
sudo useradd -m -s /bin/bash CI_DEPLOY_USER
sudo usermod -aG sudo CI_DEPLOY_USER

# Set a secure password (for emergency access only)
echo "CI_DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd

# Copy your SSH key to the CI deployment user
sudo mkdir -p /home/CI_DEPLOY_USER/.ssh
sudo cp ~/.ssh/authorized_keys /home/CI_DEPLOY_USER/.ssh/
sudo chown -R CI_DEPLOY_USER:CI_DEPLOY_USER /home/CI_DEPLOY_USER/.ssh
sudo chmod 700 /home/CI_DEPLOY_USER/.ssh
sudo chmod 600 /home/CI_DEPLOY_USER/.ssh/authorized_keys

# Configure sudo to use SSH key authentication (most secure)
echo "CI_DEPLOY_USER ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/CI_DEPLOY_USER
sudo chmod 440 /etc/sudoers.d/CI_DEPLOY_USER

For Production Linode:

# Create production deployment user
sudo useradd -m -s /bin/bash PROD_DEPLOY_USER
sudo usermod -aG sudo PROD_DEPLOY_USER

# Set a secure password (for emergency access only)
echo "PROD_DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd

# Copy your SSH key to the production deployment user
sudo mkdir -p /home/PROD_DEPLOY_USER/.ssh
sudo cp ~/.ssh/authorized_keys /home/PROD_DEPLOY_USER/.ssh/
sudo chown -R PROD_DEPLOY_USER:PROD_DEPLOY_USER /home/PROD_DEPLOY_USER/.ssh
sudo chmod 700 /home/PROD_DEPLOY_USER/.ssh
sudo chmod 600 /home/PROD_DEPLOY_USER/.ssh/authorized_keys

# Configure sudo to use SSH key authentication (most secure)
echo "PROD_DEPLOY_USER ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/PROD_DEPLOY_USER
sudo chmod 440 /etc/sudoers.d/PROD_DEPLOY_USER

Security Note: This configuration allows the deployment users to use sudo without a password, which is more secure for CI/CD automation since there are no passwords to store or expose. The random password is set for emergency console access only.

0.4.5 Test Sudo Access

Test that the deployment users can use sudo without password prompts:

# Test CI deployment user sudo access
ssh CI_DEPLOY_USER@YOUR_CI_CD_IP 'sudo whoami'

# Test production deployment user sudo access
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'sudo whoami'

Expected output: Both commands should return root without prompting for a password.

0.4.6 Test Deployment User Access

Test that you can access both servers as the deployment users:

# Test CI/CD Linode
ssh CI_DEPLOY_USER@YOUR_CI_CD_IP 'echo "CI deployment user SSH access works for CI/CD"'

# Test Production Linode
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "Production deployment user SSH access works for Production"'

Expected output: The echo messages should appear without password prompts.

0.4.7 Create SSH Config for Easy Access

On your local machine, create an SSH config file for easy access:

# Create SSH config
cat > ~/.ssh/config << 'EOF'
Host ci-cd-dev
    HostName YOUR_CI_CD_IP
    User CI_DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no

Host production-dev
    HostName YOUR_PRODUCTION_IP
    User PROD_DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no
EOF

chmod 600 ~/.ssh/config

Now you can access servers easily:

ssh ci-cd-dev
ssh production-dev
0.4.8 Secure SSH Configuration

Critical Security Step: After setting up SSH key authentication, you must disable password authentication and root login to secure your servers.

For Both CI/CD and Production Linodes:

Step 1: Edit SSH Configuration File

# Open the SSH configuration file using nano
sudo nano /etc/ssh/sshd_config

Step 2: Disallow Root Logins

Find the line that says:

#PermitRootLogin prohibit-password

Change it to:

PermitRootLogin no

Step 3: Disable Password Authentication

Find the line that says:

#PasswordAuthentication yes

Change it to:

PasswordAuthentication no

Step 4: Configure Protocol Family (Optional)

If you only need IPv4 connections, find or add:

#AddressFamily any

Change it to:

AddressFamily inet

Step 5: Save and Exit

  • Press Ctrl + X to exit
  • Press Y to confirm saving
  • Press Enter to confirm the filename

Step 6: Test SSH Configuration

# Test the SSH configuration for syntax errors
sudo sshd -t

Step 7: Restart SSH Service

For Ubuntu 24.04 LTS (socket-based activation):

sudo systemctl restart ssh

For other distributions:

sudo systemctl restart sshd

Step 8: Verify SSH Access

IMPORTANT: Test SSH access from a new terminal window before closing your current session:

# Test CI/CD Linode
ssh CI_DEPLOY_USER@YOUR_CI_CD_IP 'echo "SSH configuration test successful"'

# Test Production Linode
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "SSH configuration test successful"'

What these changes do:

  • PermitRootLogin no: Completely disables root SSH access
  • PasswordAuthentication no: Disables password-based authentication
  • AddressFamily inet: Listens only on IPv4 (optional, for additional security)

Security Benefits:

  • No root access: Eliminates the most common attack vector
  • Key-only authentication: Prevents brute force password attacks
  • Protocol restriction: Limits SSH to IPv4 only (if configured)

Emergency Access:

If you lose SSH access, you can still access the server through:

  • Linode Console: Use the Linode dashboard's console access
  • Emergency mode: Boot into single-user mode if needed

Verification Commands:

# Check SSH configuration
sudo grep -E "(PermitRootLogin|PasswordAuthentication|AddressFamily)" /etc/ssh/sshd_config

# Check SSH service status
sudo systemctl status ssh

# Check SSH logs for any issues
sudo journalctl -u ssh -f

# Test SSH access from a new session
ssh CI_DEPLOY_USER@YOUR_CI_CD_IP 'whoami'
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'whoami'

Expected Output:

  • PermitRootLogin no
  • PasswordAuthentication no
  • AddressFamily inet (if configured)
  • SSH service should be "active (running)"
  • Test commands should return the deployment user names

Important Security Notes:

  1. Test before closing: Always test SSH access from a new session before closing your current SSH connection
  2. Keep backup: You can restore the original configuration if needed
  3. Monitor logs: Check /var/log/auth.log for SSH activity and potential attacks
  4. Regular updates: Keep SSH and system packages updated for security patches

Alternative: Manual Configuration with Backup

If you prefer to manually edit the file with a backup:

# Create backup
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.backup

# Edit the file
sudo nano /etc/ssh/sshd_config

# Test configuration
sudo sshd -t

# Restart service
sudo systemctl restart ssh
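
If the new configuration causes problems, you can roll back from the backup and restart SSH:

# Restore the original SSH configuration from the backup
sudo cp /etc/ssh/sshd_config.backup /etc/ssh/sshd_config

# Validate and restart SSH
sudo sshd -t && sudo systemctl restart ssh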

Part 1: CI/CD Linode Setup

Step 1: Initial System Setup

1.1 Update the System

sudo apt update && sudo apt upgrade -y

What this does: Updates package lists and upgrades all installed packages to their latest versions.

Expected output: A list of packages being updated, followed by completion messages.

1.2 Configure Timezone

# Configure timezone interactively
sudo dpkg-reconfigure tzdata

# Verify timezone setting
date

What this does: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).

Expected output: After selecting your timezone, the date command should show the current date and time in your selected timezone.

1.3 Configure /etc/hosts

# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts

# Verify the configuration
cat /etc/hosts

What this does:

  • Adds localhost entries for both IPv4 and IPv6 addresses to /etc/hosts
  • Ensures proper localhost resolution for both IPv4 and IPv6

Important: Replace YOUR_CI_CD_IPV4_ADDRESS and YOUR_CI_CD_IPV6_ADDRESS with the actual IPv4 and IPv6 addresses of your CI/CD Linode obtained from your Linode dashboard.

Expected output: The /etc/hosts file should show entries for 127.0.0.1, ::1, and your Linode's actual IP addresses all mapping to localhost.

1.4 Install Essential Packages

sudo apt install -y \
    curl \
    wget \
    git \
    jq \
    build-essential \
    pkg-config \
    libssl-dev \
    ca-certificates \
    apt-transport-https \
    software-properties-common \
    apache2-utils

Note: jq is included here because it is used later to query the Forgejo API for the latest runner release (Step 5.1).

1.5 Configure Firewall FIRST - Before Any Container Operations

SECURITY FIRST: Configure firewall BEFORE installing Podman to prevent any accidental exposure.

# Configure secure firewall defaults
sudo ufw --force reset
sudo ufw default deny incoming
sudo ufw default allow outgoing

# ONLY allow SSH (port 22)
sudo ufw allow ssh

# Enable firewall immediately
sudo ufw --force enable

# Verify firewall configuration
sudo ufw status verbose

# Expected output:
# Status: active
# Logging: on (low)
# Default: deny (incoming), allow (outgoing), disabled (routed)
# New profiles: skip
# 
# To                         Action      From
# --                         ------      ----
# 22/tcp                     ALLOW IN    Anywhere

# Test that essential services still work
curl -I https://docker.io
curl -I https://quay.io

1.6 Install Podman

Note: Podman is required for container operations and will be installed AFTER firewall is configured.

# Install Podman and related tools
sudo apt install -y podman

# Verify installation
podman --version

# Configure Podman for rootless operation (optional but recommended)
echo 'kernel.unprivileged_userns_clone=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

What this does: Installs Podman and configures it for rootless operation, which is needed for the CI pipeline and Forgejo Container Registry operations.
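
Once the service account exists (Step 2), you can confirm that Podman runs rootless for that user; this is a quick sanity check, and the expected output is shown in the comment:

# Run as the CI service user after Step 2; should print "true" for rootless operation
sudo -u CI_SERVICE_USER podman info --format '{{.Host.Security.Rootless}}'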

Step 2: Create Users

2.1 Create CI Service Account

# Create dedicated group for the CI service account
sudo groupadd -r CI_SERVICE_USER

# Create CI service account user with dedicated group
sudo useradd -r -g CI_SERVICE_USER -s /bin/bash -m -d /home/CI_SERVICE_USER CI_SERVICE_USER
echo "CI_SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd

2.2 Configure Rootless Podman for CI_SERVICE_USER

# Configure subuid/subgid for CI_SERVICE_USER (required for rootless Podman)
sudo usermod --add-subuids 100000-165535 CI_SERVICE_USER
sudo usermod --add-subgids 100000-165535 CI_SERVICE_USER

# Verify the configuration
grep CI_SERVICE_USER /etc/subuid
grep CI_SERVICE_USER /etc/subgid

2.3 Verify Users

sudo su - CI_SERVICE_USER
whoami
pwd
exit

sudo su - CI_DEPLOY_USER
whoami
pwd
exit

Step 3: Clone Repository for Registry Configuration

3.1 Clone Repository

# Switch to CI_DEPLOY_USER (who has sudo access)
sudo su - CI_DEPLOY_USER

# Create application directory and clone repository
sudo mkdir -p /opt/APP_NAME
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/APP_NAME
cd /opt
sudo git clone https://your-forgejo-instance/your-username/APP_NAME.git
sudo chown -R CI_SERVICE_USER:CI_SERVICE_USER APP_NAME/

# Verify the application directory structure
ls -la /opt/APP_NAME/

Important: Replace your-forgejo-instance, your-username, and APP_NAME with your actual Forgejo instance URL, username, and application name.

What this does:

  • CI_DEPLOY_USER creates the directory structure and clones the repository
  • CI_SERVICE_USER owns all the files for security
  • Application configuration files are now available at /opt/APP_NAME/

Step 4: Configure Forgejo Container Registry Access

Note: This project uses Forgejo's built-in Container Registry instead of a separate Docker Registry. The CI/CD pipeline is already configured to use Forgejo Container Registry.

Configuration: Registry access is handled through:

  • Authentication: Forgejo Personal Access Tokens (PAT)
  • Registry URL: Your Forgejo instance's registry endpoint
  • Security: Built-in Forgejo authentication and authorization

Quick Reference: The Forgejo Container Registry will be accessible at:

  • Registry URL: ${REGISTRY_HOST} (configured in secrets)
  • Authentication: PAT with write:packages scope for pushes
  • Public Access: Available for pulls from public repositories
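
As an optional sanity check, you can log in and push a small test image using a PAT with the write:packages scope; the registry host, username, and image path below are placeholders:

# Log in to the Forgejo Container Registry with a Personal Access Token
podman login "${REGISTRY_HOST}" --username YOUR_FORGEJO_USERNAME --password YOUR_PAT

# Tag and push a small test image to verify write access
podman pull alpine:latest
podman tag alpine:latest "${REGISTRY_HOST}/your-username/test:latest"
podman push "${REGISTRY_HOST}/your-username/test:latest"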

Step 5: Install Forgejo Actions Runner

5.1 Download Runner

Important: Run this step as the CI_DEPLOY_USER (not root or CI_SERVICE_USER). The CI_DEPLOY_USER handles deployment tasks including downloading and installing the Forgejo runner.

cd ~

# Get the latest version dynamically
LATEST_VERSION=$(curl -s https://code.forgejo.org/api/v1/repos/forgejo/runner/releases | jq -r '.[0].tag_name')
echo "Downloading Forgejo runner version: $LATEST_VERSION"

# Download the latest runner
wget https://code.forgejo.org/forgejo/runner/releases/download/${LATEST_VERSION}/forgejo-runner-${LATEST_VERSION#v}-linux-amd64
chmod +x forgejo-runner-${LATEST_VERSION#v}-linux-amd64
sudo mv forgejo-runner-${LATEST_VERSION#v}-linux-amd64 /usr/bin/forgejo-runner

Alternative: Pin to Specific Version (Recommended for Production)

If you prefer to pin to a specific version for stability, replace the dynamic download with:

cd ~
VERSION="v6.3.1"  # Pin to specific version
wget https://code.forgejo.org/forgejo/runner/releases/download/${VERSION}/forgejo-runner-${VERSION#v}-linux-amd64
chmod +x forgejo-runner-${VERSION#v}-linux-amd64
sudo mv forgejo-runner-${VERSION#v}-linux-amd64 /usr/bin/forgejo-runner

What this does:

  • Dynamic approach: Downloads the latest stable Forgejo Actions runner
  • Version pinning: Allows you to specify a known-good version for production
  • System installation: Installs the binary system-wide in /usr/bin/ for proper Linux structure
  • Makes the binary executable and available system-wide

Production Recommendation: Use version pinning in production environments to ensure consistency and avoid unexpected breaking changes.
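
After installation, you can confirm the binary is on the PATH and check the version it reports (the --version flag is expected to print the release you downloaded):

which forgejo-runner
forgejo-runner --version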

5.2 Register Runner

Important: The runner must be registered with your Forgejo instance before it can start. This creates the required .runner configuration file.

Step 1: Get Permissions to Create Repository-level Runners

To create a repository-level runner, you need Repository Admin or Owner permissions. Here's how to check and manage permissions:

Check Your Current Permissions:

  1. Go to your repository: https://your-forgejo-instance/your-username/your-repo
  2. Look for the Settings tab in the repository navigation
  3. If you see Actions in the left sidebar under Settings, you have the right permissions
  4. If you don't see Settings or Actions, you don't have admin access

Add Repository Admin (Repository Owner Only):

If you're the repository owner and need to give someone else admin access:

  1. Go to Repository Settings:

    • Navigate to your repository
    • Click Settings tab
    • Click Collaborators in the left sidebar
  2. Add Collaborator:

    • Click Add Collaborator button
    • Enter the username or email of the person you want to add
    • Select Admin from the role dropdown
    • Click Add Collaborator
  3. Alternative: Manage Team Access (for Organizations):

    • Go to Settings → Collaborators
    • Click Manage Team Access
    • Add the team with Admin permissions

Repository Roles and Permissions:

Role     Can Create Runners   Can Manage Repository   Can Push Code
Owner    Yes                  Yes                     Yes
Admin    Yes                  Yes                     Yes
Write    No                   No                      Yes
Read     No                   No                      No

If You Don't Have Permissions:

Option 1: Ask Repository Owner

  • Contact the person who owns the repository
  • Ask them to create the runner and share the registration token with you

Option 2: Use Organization/User Runner

  • If you have access to organization settings, create an org-level runner
  • Or create a user-level runner if you own other repositories

Option 3: Site Admin Help

  • Contact your Forgejo instance administrator to create a site-level runner

Site Administrator: Setting Repository Admin (Forgejo Instance Admin)

To add an existing user as an Administrator of an existing repository in Forgejo, follow these steps:

  1. Go to the repository: Navigate to the main page of the repository you want to manage.
  2. Access repository settings: Click on the "Settings" tab under your repository name.
  3. Go to Collaborators & teams: In the sidebar, under the "Access" section, click on "Collaborators & teams".
  4. Manage access: Under "Manage access", locate the existing user you want to make an administrator.
  5. Change their role: Next to the user's name, select the "Role" dropdown menu and click on "Administrator".

Important Note: If the user is already the Owner of the repository, then they do not have to add themselves as an Administrator of the repository and indeed cannot. Repository owners automatically have all administrative permissions.

Important Notes for Site Administrators:

  • Repository Admin can manage the repository but cannot modify site-wide settings
  • Site Admin retains full control over the Forgejo instance
  • Changes take effect immediately for the user
  • Consider the security implications of granting admin access

Step 2: Get Registration Token

  1. Go to your Forgejo repository
  2. Navigate to Settings → Actions → Runners
  3. Click "New runner"
  4. Copy the registration token

Step 3: Register the Runner

# Switch to CI_DEPLOY_USER to register the runner
sudo su - CI_DEPLOY_USER

cd ~

# Register the runner with your Forgejo instance
forgejo-runner register \
  --instance https://your-forgejo-instance \
  --token YOUR_REGISTRATION_TOKEN \
  --name "ci-runner" \
  --labels "ci" \
  --no-interactive

Important: Replace your-forgejo-instance with your actual Forgejo instance URL and YOUR_REGISTRATION_TOKEN with the token you copied from Step 2. Also make sure the instance URL ends in a /.

Note: The your-forgejo-instance should be the base URL of your Forgejo instance (e.g., https://git.<your-domain>/), not the full path to the repository. The runner registration process will handle connecting to the specific repository based on the token you provide.

What this does:

  • Creates the required .runner configuration file in the CI_DEPLOY_USER's home directory
  • Registers the runner with your Forgejo instance
  • Sets up the runner with the ci label that the pipeline jobs target

Step 4: Set Up System Configuration

# Create system config directory for Forgejo runner
sudo mkdir -p /var/lib/forgejo-runner

# Copy the runner configuration to system location
sudo mv /home/CI_DEPLOY_USER/.runner /var/lib/forgejo-runner/.runner

# Set proper ownership and permissions
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /var/lib/forgejo-runner/.runner
sudo chmod 600 /var/lib/forgejo-runner/.runner

What this does:

  • Copies the configuration to the system location (/var/lib/forgejo-runner/.runner)
  • Sets proper ownership and permissions for CI_SERVICE_USER to access the config

Step 5: Create and Enable Systemd Service

sudo tee /etc/systemd/system/forgejo-runner.service > /dev/null << 'EOF'
# /etc/systemd/system/forgejo-runner.service
[Unit]
Description=Forgejo Actions Runner (CI, rootless)
Wants=network-online.target user@%U.service
After=network-online.target user@%U.service

[Service]
User=ci-service
Group=ci-service

# Point runner at the rootless Podman user socket; no TCP sockets.
Environment=XDG_RUNTIME_DIR=/run/user/%U
Environment=DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/%U/bus
Environment=DOCKER_HOST=unix:///run/user/%U/podman/podman.sock

# Use your config that gives job containers outbound DNS/HTTPS (egress only)
ExecStart=/usr/bin/forgejo-runner daemon --config /etc/forgejo-runner-ci.yaml
Restart=always
RestartSec=2
NoNewPrivileges=yes

# Lock it down; allow writes only where needed for jobs/state
ProtectSystem=strict
ProtectHome=read-only
PrivateTmp=yes
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
RestrictSUIDSGID=yes
LockPersonality=yes
CapabilityBoundingSet=
AmbientCapabilities=
SystemCallArchitectures=native
ReadWritePaths=/home/ci-service/.cache/act /var/lib/forgejo-runner-ci

[Install]
WantedBy=multi-user.target
EOF

# One-time prep as CI_DEPLOY_USER:

SVC=ci-service
RUN_UID=$(id -u "$SVC")

# Ensure the user manager + user socket exist
sudo loginctl enable-linger "$SVC"
sudo systemctl start "user@${RUN_UID}.service"
sudo -u "$SVC" XDG_RUNTIME_DIR=/run/user/$RUN_UID DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/$RUN_UID/bus \
  systemctl --user enable --now podman.socket

# Create writable dirs for the hardened unit
sudo install -d -o "$SVC" -g "$SVC" -m 0750 /home/$SVC/.cache/act /var/lib/forgejo-runner-ci

# Point your runner config to the token in /var/lib (least privilege)
# /etc/forgejo-runner/config.yaml -> runner.file: /var/lib/forgejo-runner/.runner

# Reload + enable + start the system unit
sudo systemctl daemon-reload
sudo systemctl enable --now forgejo-runner.service

What this does:

  • Creates the systemd service configuration for the Forgejo runner, running it as the CI service user inside a hardened sandbox
  • Points the runner at the rootless Podman user socket (no TCP sockets) and at the runner config, which references the registration file in /var/lib/forgejo-runner
  • The runner starts here, but the CI workflow deploys the application to /opt/APP_NAME
  • Enables the service to start automatically on boot
  • Sets up proper restart behavior for reliability

5.3 Start Service

# Start the Forgejo runner service
sudo systemctl start forgejo-runner.service

# Verify the service is running
sudo systemctl status forgejo-runner.service

Expected Output: The service should show "active (running)" status.

5.4 Test Runner Configuration

# Check if the runner is running
sudo systemctl status forgejo-runner.service

# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager

# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "ci-runner" with status "Online"

Expected Output:

  • systemctl status should show "active (running)"
  • Forgejo web interface should show the runner as online with "ci" label

If something goes wrong:

  • Check logs: sudo journalctl -u forgejo-runner.service -f
  • Verify token: Make sure the registration token is correct
  • Check network: Ensure the runner can reach your Forgejo instance
  • Restart service: sudo systemctl restart forgejo-runner.service

Step 6: Set Up Podman Host Socket Service

CRITICAL: Before proceeding with PiP containers, you must start the Podman host socket service that provides the UNIX socket for container communication.

6.1 Install and Start Podman Host Socket Service (Security-Hardened)

# Switch to CI_DEPLOY_USER (who has sudo privileges for system configuration)
sudo su - CI_DEPLOY_USER

# Configuration
SVC_USER="CI_SERVICE_USER"
PODMAN="/usr/bin/podman"
UNIT_DST="/etc/systemd/user/podman-host-socket.service"

# 0) Preconditions
ps -p 1 -o comm= | grep -qx systemd || { echo "PID1 is not systemd"; exit 1; }
command -v "$PODMAN" >/dev/null || { echo "podman not found at $PODMAN"; exit 1; }

# 1) Install the hardened unit (global user unit, root-owned, read-only)
sudo install -o root -g root -m 0644 /dev/stdin "$UNIT_DST" <<'EOF'
[Unit]
Description=Rootless Podman REST (UNIX socket only)
After=default.target

[Service]
Type=simple
UMask=007
NoNewPrivileges=yes
# %t expands to /run/user/$UID for *user* services
ExecStartPre=/usr/bin/mkdir -p %t/podman-host
ExecStartPre=/usr/bin/chmod 770 %t/podman-host
ExecStart=/usr/bin/podman --log-level=info system service --time=0 unix://%t/podman-host/podman.sock
Restart=always
RestartSec=2

[Install]
WantedBy=default.target
EOF

sudo stat -c '%U:%G %a %n' "$UNIT_DST" | grep -q 'root:root 644' || { echo "Unit perms wrong"; exit 1; }

# 2) Ensure rootless prerequisites (Ubuntu/Debian)
if command -v apt >/dev/null 2>&1; then
  sudo apt-get update -y
  sudo apt-get install -y dbus-user-session uidmap slirp4netns fuse-overlayfs
fi

# Ensure subuid/subgid (safe if already present)
if ! grep -q "^${SVC_USER}:" /etc/subuid; then echo "${SVC_USER}:100000:65536" | sudo tee -a /etc/subuid >/dev/null; fi
if ! grep -q "^${SVC_USER}:" /etc/subgid; then echo "${SVC_USER}:100000:65536" | sudo tee -a /etc/subgid >/dev/null; fi


# Alternative to steps 3)-5) below: manage the user session with systemd-container
# (no env exports needed). One-time setup:
sudo apt-get update -y && sudo apt-get install -y systemd-container
sudo loginctl enable-linger ci-service
sudo systemctl start "user@$(id -u ci-service).service"

# now you can do this anywhere, no env exports:
sudo systemctl --user --machine=ci-service@ daemon-reload
sudo systemctl --user --machine=ci-service@ enable --now podman.socket
sudo systemctl --user --machine=ci-service@ status podman.socket --no-pager


# 3) Enable linger so the user's manager runs without login
sudo loginctl enable-linger "$SVC_USER"
loginctl show-user "$SVC_USER" | grep -q '^Linger=yes' || { echo "Linger not enabled"; exit 1; }

# 4) Start the user's systemd instance and point to its bus
uid=$(id -u "$SVC_USER")
sudo systemctl start "user@${uid}.service"

export XDG_RUNTIME_DIR=/run/user/$uid
export DBUS_SESSION_BUS_ADDRESS=unix:path=$XDG_RUNTIME_DIR/bus

# 5) Enable+start the unit in the *user* manager (acting as the user)
sudo -u "$SVC_USER" XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR DBUS_SESSION_BUS_ADDRESS=$DBUS_SESSION_BUS_ADDRESS \
  systemctl --user daemon-reload

sudo -u "$SVC_USER" XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR DBUS_SESSION_BUS_ADDRESS=$DBUS_SESSION_BUS_ADDRESS \
  systemctl --user enable --now podman-host-socket.service

# 6) Verify directory, socket, and connectivity (as the user)
sudo -u "$SVC_USER" bash -lc '
  R="/run/user/$(id -u)"
  ls -ld "$R" "$R/podman-host"
  ls -l  "$R/podman-host/podman.sock"
  podman --url unix://$R/podman-host/podman.sock version
  podman --url unix://$R/podman-host/podman.sock info
'

# 7) Belt & suspenders: ensure no Docker/Podman TCP daemons
ss -ltnp | grep -E '(2375|2376)' && { echo "ERROR: TCP daemon detected"; exit 1; } || echo "No TCP sockets (good)"

What this does:

  • FHS compliance: Moves service file to /etc/systemd/user/ (standard system location)
  • Proper ownership: Service file owned by root with appropriate permissions
  • Enables lingering: Allows CI_SERVICE_USER's systemd services to start at boot
  • Starts socket service: Creates the UNIX socket that PiP containers will use
  • Verifies operation: Ensures the socket is properly created and accessible

If you see cgroup warnings:

# If you see warnings about systemd user session, enable lingering properly
sudo loginctl enable-linger CI_SERVICE_USER

# Verify lingering is enabled
sudo loginctl show-user CI_SERVICE_USER | grep Linger
# Should show: Linger=yes

6.2 Verify Socket Permissions

# Ensure proper socket permissions (should be 660)
chmod 660 /run/user/$(id -u)/podman-host/podman.sock

# Verify socket is accessible
podman --url unix:///run/user/$(id -u)/podman-host/podman.sock info

Expected Output:

  • Socket service should show "active (running)"
  • UNIX socket should exist at /run/user/<CI_SERVICE_USER_UID>/podman-host/podman.sock
  • Socket should have permissions srw-rw----
  • Podman commands should work through the socket

Environment Variables for PiP Scripts

Before proceeding with Section 7, you need to understand the environment variables used by the PiP (Podman-in-Podman) scripts. These variables control the behavior of secure_pip_setup.sh and pip_ready.sh and are automatically set in CI environments but may need manual configuration for local testing.

Required Environment Variables

1. PODMAN_CLIENT_IMG_DIGEST (REQUIRED)

  • Purpose: Pinned image digest for the Podman client container used in PiP
  • Format: Must be a full digest reference including the registry URL (e.g., quay.io/podman/stable@sha256:...)
  • How to obtain:
    # Host arch (should print amd64)
    podman info --format '{{.Host.Arch}}'
    # Get Podman client image digest
    DIGEST=$(podman manifest inspect quay.io/podman/stable:latest | jq -r '.manifests[] | select(.platform.os=="linux" and .platform.architecture=="amd64") | .digest')
    # Combine with registry URL to create full digest reference
    export PODMAN_CLIENT_IMG_DIGEST="quay.io/podman/stable@${DIGEST}"
    echo "PODMAN_CLIENT_IMG_DIGEST=${PODMAN_CLIENT_IMG_DIGEST}"
    # Result: quay.io/podman/stable@sha256:5dd9f78bd233970ea4a36bb65d5fc63b7edbb9c7f800ab7901fa912564f36415
    
  • Security Importance: Prevents supply chain attacks by ensuring only verified images are used

2. RUN_ID (Optional, Auto-detected)

  • Purpose: Unique identifier for each CI run to prevent container name conflicts
  • Default: $GITHUB_RUN_ID in CI, local for manual runs
  • Usage: Used to create unique container names like ci-pip-123

3. PIP_NAME (Optional, Auto-generated)

  • Purpose: Name of the PiP container for readiness checking
  • Default: ci-pip-${RUN_ID} (e.g., ci-pip-123)
  • Usage: Used by pip_ready.sh to monitor container readiness

4. SOCKET_PATH (Optional, Auto-detected)

  • Purpose: Path to the Podman UNIX socket for container communication
  • Default: $XDG_RUNTIME_DIR/podman-host/podman.sock
  • How to verify:
    # Check if socket exists
    ls -la /run/user/$(id -u)/podman-host/podman.sock
    # Should show: srw-rw---- 1 user user 0 ... /run/user/1000/podman-host/podman.sock
    

5. WORKSPACE (Optional, Auto-detected)

  • Purpose: Directory containing the application code for volume mounting
  • Default: $GITHUB_WORKSPACE in CI, $PWD for manual runs
  • Usage: Mounted as /workspace inside PiP containers

6. PIP_UID and PIP_GID (Optional)

  • Purpose: User and group IDs for the PiP container (security hardening)
  • Default: 1000:1000 (non-root user)
  • Security Benefit: Prevents privilege escalation by running as non-root

7. TIMEOUT and SLEEP (pip_ready.sh only)

  • Purpose: Control readiness probe timing
  • Defaults: TIMEOUT=30 (seconds), SLEEP=2 (seconds between checks)
  • Usage: Adjust for slower systems if needed
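
For example, on a slower machine you can widen the readiness probe window when invoking the script (values are illustrative):

# Allow up to 60 seconds, checking every 5 seconds
TIMEOUT=60 SLEEP=5 ./pip_ready.sh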

Environment Setup in CI vs Local

In CI Environment (Automatic):

  • All variables are automatically set by GitHub Actions/Forgejo
  • RUN_ID, WORKSPACE, and other GitHub-specific variables are pre-configured
  • Secrets like PODMAN_CLIENT_IMG_DIGEST come from repository secrets

For Local Testing (Manual Setup):

# Example local testing setup
export PODMAN_CLIENT_IMG_DIGEST="quay.io/podman/stable@sha256:..."
export RUN_ID="local-test"
export WORKSPACE="$(pwd)"

# Run the scripts
./secure_pip_setup.sh
./pip_ready.sh

Verification Commands:

# Check current environment variables
env | grep -E "(RUN_ID|PIP_NAME|SOCKET_PATH|WORKSPACE|PODMAN)"

# Test socket accessibility
ls -la "${SOCKET_PATH}"

# Verify Podman client image exists
podman manifest inspect "${PODMAN_CLIENT_IMG_DIGEST%%@*}"

Step 7: Set Up Ephemeral Podman-in-Podman (PiP) for Secure CI Operations

7.1 Secure Ephemeral PiP Container Setup

CRITICAL SECURITY NOTE: This setup uses ephemeral PiP containers with UNIX socket communication only - NO network ports exposed. Each CI run creates a fresh PiP container that is destroyed after completion.

# Switch to CI_SERVICE_USER (who has Podman access)
sudo su - CI_SERVICE_USER

# Navigate to the application directory
cd /opt/APP_NAME

# Make the secure setup scripts executable
chmod +x secure_pip_setup.sh pip_ready.sh

# Run the secure PiP setup script
./secure_pip_setup.sh

# Wait for PiP to be ready
./pip_ready.sh

What the secure scripts do:

  • secure_pip_setup.sh: Creates ephemeral PiP container with maximum security constraints
  • pip_ready.sh: Comprehensive readiness probe with retry logic and health checks

Security Features:

  • Ephemeral containers: Fresh PiP container per CI run, destroyed after completion
  • No exposed ports: UNIX socket communication only, no TCP ports
  • Least privilege: --cap-drop=ALL, --security-opt=no-new-privileges
  • Read-only rootfs: --read-only with tmpfs for writable directories
  • No network: --network=none for maximum isolation
  • Secure socket permissions: Proper ownership and 660 permissions
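
For reference, the constraints above correspond roughly to a podman run invocation like the following; this is a sketch of the kind of command secure_pip_setup.sh issues, and the exact flags, mount points, and entrypoint live in the script itself:

# Illustrative only - the real flags and mounts are defined in secure_pip_setup.sh
podman run -d --name "ci-pip-${RUN_ID}" \
  --cap-drop=ALL \
  --security-opt=no-new-privileges \
  --read-only --tmpfs /tmp --tmpfs /run \
  --network=none \
  --user "${PIP_UID:-1000}:${PIP_GID:-1000}" \
  -v "${SOCKET_PATH}:/run/podman/podman.sock" \
  -v "${WORKSPACE}:/workspace" \
  "${PODMAN_CLIENT_IMG_DIGEST}" \
  sleep infinity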

7.2 Integration Testing with PostgreSQL

The CI pipeline now includes comprehensive integration testing:

# Test PiP connectivity through secure socket
podman exec ci-pip-local podman version

# Start PostgreSQL for integration tests
podman exec ci-pip-local podman run -d \
  --name test-postgres \
  -e POSTGRES_PASSWORD=password \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_DB=sharenet_test \
  -p 5432:5432 \
  "${POSTGRES_IMG_DIGEST}"

# Wait for PostgreSQL to be ready
podman exec ci-pip-local timeout 60 bash -c 'until podman exec test-postgres pg_isready -h localhost -p 5432 -U postgres; do sleep 1; done'

# Run backend unit tests
podman exec ci-pip-local podman run --rm \
  -v $(pwd)/backend:/workspace \
  -w /workspace \
  "${RUST_IMG_DIGEST}" \
  sh -c "cargo test --lib -- --test-threads=1"

# Run backend integration tests with real database
podman exec ci-pip-local podman run --rm \
  -v $(pwd)/backend:/workspace \
  -w /workspace \
  -e DATABASE_URL=postgres://postgres:password@localhost:5432/sharenet_test \
  "${RUST_IMG_DIGEST}" \
  sh -c "cargo test --test '*' -- --test-threads=1"

Testing Benefits:

  • Full integration testing: Real PostgreSQL database for backend tests
  • Isolated environment: Each test run gets fresh database
  • Comprehensive coverage: Unit tests + integration tests
  • Secure networking: Database only accessible within PiP container

7.3 CI/CD Workflow Architecture

The CI/CD pipeline uses ephemeral PiP containers with this secure workflow:

Job 1 (Backend Testing):

  • Creates ephemeral PiP container
  • Starts PostgreSQL for integration tests
  • Runs backend unit and integration tests
  • Tests database connectivity and migrations

Job 2 (Frontend Testing):

  • Reuses or creates new PiP container
  • Runs frontend tests with Node.js
  • Executes linting and type checking

Job 3 (Image Building):

  • Builds Docker images within PiP container
  • Pushes images to Forgejo Container Registry
  • Uses secure authentication from repository secrets

Job 4 (Cleanup):

  • Destroys PiP container and cleans up sockets
  • Ensures no persistent state between runs

Key Security Benefits:

  • 🛡️ Zero persistent state: No containers survive CI runs
  • 🛡️ No port exposure: All communication through UNIX sockets
  • 🛡️ Least privilege: Minimal capabilities, no root access
  • 🛡️ Network isolation: PiP containers have no external network
  • 🛡️ Ephemeral execution: Fresh environment every time

7.4 Set Up Workspace Directory

Important: The CI workflow needs a workspace directory for code checkout. This directory will be used by the Forgejo Actions runner.

# Switch to CI_DEPLOY_USER (who has sudo privileges)
sudo su - CI_DEPLOY_USER

# Create workspace directory in /tmp with proper permissions
sudo mkdir -p /tmp/ci-workspace
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /tmp/ci-workspace
sudo chmod 755 /tmp/ci-workspace

# Verify the setup
ls -la /tmp/ci-workspace

What this does:

  • Creates workspace: Provides a dedicated directory for CI operations
  • Proper ownership: CI_SERVICE_USER owns the directory for write access
  • Appropriate permissions: 755 allows read/write for owner, read for others
  • Temporary location: Uses /tmp for easy cleanup and no persistence needed

Alternative locations (if you prefer):

  • /opt/ci-workspace - More permanent location
  • /home/CI_SERVICE_USER/workspace - User's home directory
  • /var/lib/ci-workspace - System-managed location

Note: The CI workflow will use this directory for code checkout and then copy the contents into the ephemeral PiP container.

FHS-Compliant Directory Structure

The Forgejo Container Registry setup uses the built-in registry functionality, providing secure and integrated container image storage.

Application Files (in /opt/APP_NAME/):

  • Configuration files for the application
  • Nginx configuration for reverse proxy

System Files (FHS-compliant locations):

  • /var/lib/ - Application data storage
  • /etc/nginx/ - Nginx configuration
  • /var/log/nginx/ - nginx proxy logs

Benefits of FHS Compliance:

  • Data persistence: Application data stored in proper locations survives restarts
  • Service management: Proper separation of application components
  • Log management: Nginx logs in /var/log/nginx/ for centralized logging and easier troubleshooting
  • Configuration separation: App configs in app directory, system data in system directories
  • Policy enforcement: Container policies for image signature verification


7.5 CI/CD Workflow Architecture with Ephemeral PiP

The CI/CD pipeline uses ephemeral Podman-in-Podman containers with a secure four-stage approach:

Job 1 (Backend Testing) - Ephemeral PiP:

  • Purpose: Comprehensive backend testing with real PostgreSQL
  • Environment: Fresh PiP container with PostgreSQL for integration tests
  • Services:
    • PostgreSQL database for integration tests
    • Rust toolchain for backend testing
  • Security: No network exposure, UNIX socket only
  • Cleanup: PiP container destroyed after test completion

Job 2 (Frontend Testing) - Ephemeral PiP:

  • Purpose: Frontend testing and validation
  • Environment: Fresh PiP container with Node.js
  • Services: Node.js toolchain for frontend testing
  • Tests: Unit tests, linting, type checking, build verification
  • Cleanup: PiP container destroyed after test completion

Job 3 (Image Building) - Ephemeral PiP:

  • Purpose: Secure image building and registry push
  • Environment: Fresh PiP container for building
  • Process:
    • Builds backend and frontend images using Podman
    • Pushes images to Forgejo Container Registry
    • Uses secure authentication from repository secrets
  • Cleanup: PiP container destroyed after build completion

Job 4 (Cleanup) - System:

  • Purpose: Ensure no persistent state remains
  • Process: Removes any remaining containers and sockets
  • Security: Prevents resource accumulation and state persistence
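
A minimal sketch of what the cleanup job amounts to (the container name follows the default pattern described earlier; your workflow may add further steps):

# Remove the ephemeral PiP container if it is still present
podman rm -f "ci-pip-${RUN_ID}" 2>/dev/null || true

# Verify nothing is left behind
podman ps -a --filter "name=ci-pip" --format "{{.Names}}"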

Key Security Benefits:

  • 🛡️ Ephemeral Execution: Fresh PiP container for every job
  • 🛡️ Zero Port Exposure: No TCP ports, UNIX sockets only
  • 🛡️ Network Isolation: PiP containers have no external network
  • 🛡️ Least Privilege: Minimal capabilities, no root access
  • 🛡️ Complete Cleanup: No persistent state between runs
  • 🛡️ Secret Security: Authentication via Forgejo repository secrets

Testing Advantages:

  • Real Integration Testing: PostgreSQL database for backend tests
  • Fresh Environment: No test pollution between runs
  • Comprehensive Coverage: Unit + integration tests
  • Isolated Execution: Each test run completely independent

7.6 Production Deployment Architecture

The production deployment uses a separate pod configuration (prod-pod.yml) that pulls built images from the Forgejo Container Registry and deploys the complete application stack.

Production Stack Components:

  • PostgreSQL: Production database with persistent storage
  • Backend: Rust application built and pushed from CI/CD
  • Frontend: Next.js application built and pushed from CI/CD
  • Nginx: Reverse proxy with SSL termination

Deployment Flow:

  1. Production Runner: Runs on Production Linode with production label
  2. Image Pull: Pulls latest images from Forgejo Container Registry
  3. Stack Deployment: Uses prod-pod.yml to deploy complete stack
  4. Health Verification: Ensures all services are healthy before completion

Key Benefits:

  • 🔄 Image Registry: Centralized image storage in Forgejo Container Registry
  • 📦 Consistent Deployment: Same images tested in CI are deployed to production
  • Fast Deployment: Only pulls changed images
  • 🛡️ Rollback Capability: Can easily rollback to previous image versions
  • 📊 Health Monitoring: Built-in health checks for all services
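
As an illustration of the rollback capability, redeploying an earlier version can be as simple as re-pointing the deployed tags and replaying the pod definition; the version tags and the exact replay command depend on how prod-pod.yml is managed in your setup:

# Hypothetical rollback: point the deployed tags at a previously built version
podman tag localhost/backend:v1.2.2  localhost/backend:deployed
podman tag localhost/frontend:v1.2.2 localhost/frontend:deployed

# Recreate the production pod from the (unchanged) pod definition
podman play kube --replace prod-pod.yml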

Step 8: Security Verification and Testing

8.1 Security Audit - Verify NO Exposure

# Verify NO containers expose ports to external network
podman ps --format "table {{.Names}}\t{{.Ports}}"
# Should show empty or only internal ports

# Verify firewall is active and blocking all unnecessary ports
sudo ufw status verbose

# Check listening ports - should only show SSH (22) and system services
sudo ss -tulpn | grep -E "(2375|2376|4443|5000)"
# Should return empty - no container management ports exposed

# Verify PiP container security settings
podman inspect ci-pip | grep -E "(Privileged|SecurityOpt|Capabilities)"
# Should show: "Privileged": false, security options present, limited capabilities

8.2 Test Podman Installation

podman --version

8.3 Test Secure PiP Functionality

# Test PiP operations through secure socket
podman exec ci-pip podman version

# Test image pulling through secure environment
podman exec ci-pip podman pull alpine:latest

# Test container operations within PiP
podman exec ci-pip podman run --rm alpine:latest echo "Secure PiP working"

8.4 Network Security Test

# Attempt to access container ports from external perspective (should fail)
curl -v http://localhost:2375/_ping  # Should fail with connection refused
curl -v http://0.0.0.0:2375/_ping    # Should fail with connection refused

# Test that essential outgoing connections work
podman exec ci-pip podman search alpine

Part 2: Production Linode Setup

Step 9: Initial System Setup

9.1 Update the System

sudo apt update && sudo apt upgrade -y

9.2 Configure Timezone

# Configure timezone interactively
sudo dpkg-reconfigure tzdata

# Verify timezone setting
date

What this does: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).

Expected output: After selecting your timezone, the date command should show the current date and time in your selected timezone.

9.3 Configure /etc/hosts

# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts

# Verify the configuration
cat /etc/hosts

What this does:

  • Adds localhost entries for both IPv4 and IPv6 addresses to /etc/hosts
  • Ensures proper localhost resolution for both IPv4 and IPv6

Important: Replace YOUR_PRODUCTION_IPV4_ADDRESS and YOUR_PRODUCTION_IPV6_ADDRESS with the actual IPv4 and IPv6 addresses of your Production Linode obtained from your Linode dashboard.

Expected output: The /etc/hosts file should show entries for 127.0.0.1, ::1, and your Linode's actual IP addresses all mapping to localhost.

9.4 Install Essential Packages

sudo apt install -y \
    curl \
    wget \
    git \
    ca-certificates \
    apt-transport-https \
    software-properties-common \
    ufw \
    fail2ban \
    htop \
    nginx \
    certbot \
    python3-certbot-nginx

9.5 Configure Firewall and Fail2ban FIRST - Before Any Services

SECURITY FIRST: Configure firewall and intrusion prevention BEFORE any services are installed or exposed to prevent attackers from exploiting open ports.

# Configure secure firewall defaults
sudo ufw --force reset
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow only SSH and web traffic
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

# Enable the firewall
sudo ufw --force enable

# Verify firewall configuration
sudo ufw status verbose

Security Note: We only allow ports 80 and 443 for external access. The application services (backend on 3001, frontend on 3000) are only accessible through the Nginx reverse proxy, which provides better security and SSL termination.

Fail2ban Configuration:

Fail2ban is an intrusion prevention system that monitors logs and automatically blocks IP addresses showing malicious behavior.

# Install fail2ban (if not already installed)
sudo apt install -y fail2ban

# Create a custom jail configuration
sudo tee /etc/fail2ban/jail.local > /dev/null << 'EOF'
[DEFAULT]
# Ban time in seconds (24 hours)
bantime = 86400
# Find time in seconds (10 minutes)
findtime = 600
# Max retries before ban
maxretry = 3
# Ban action (use ufw since we're using ufw firewall)
banaction = ufw
# Log level
loglevel = INFO
# Log target
logtarget = /var/log/fail2ban.log

# SSH protection
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3

# Note: Additional Nginx jails are not configured here.
# The application services are isolated in containers behind the host Nginx reverse proxy.
# Web attack protection is provided by:
# 1. UFW firewall (ports 80/443 only)
# 2. Nginx security headers and rate limiting
# 3. Application-level input validation
EOF

# Enable and start fail2ban
sudo systemctl enable fail2ban
sudo systemctl start fail2ban

# Verify fail2ban is running
sudo systemctl status fail2ban

# Check current jails
sudo fail2ban-client status

What this does:

  • SSH Protection: Blocks IPs that fail SSH login 3 times in 10 minutes
  • 24-hour bans: Banned IPs are blocked for 24 hours
  • Automatic monitoring: Continuously watches SSH logs

Web Security Note: Since application traffic reaches the app services only through the host Nginx reverse proxy, web attack protection is handled by:

  • UFW Firewall: Only allows ports 80/443 (no direct access to app services)
  • Nginx Security: Built-in rate limiting and security headers
  • Application Security: Input validation in the backend/frontend code
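
If you want explicit rate limiting on the host Nginx, a minimal sketch looks like the following; the file name, zone name, and limits are assumptions, and you still need a matching limit_req directive inside your server or location block:

# Hypothetical rate-limit zone for the host Nginx (tune the rate to your traffic)
sudo tee /etc/nginx/conf.d/rate-limit.conf > /dev/null << 'EOF'
limit_req_zone $binary_remote_addr zone=app_limit:10m rate=10r/s;
EOF

# Validate the configuration and reload Nginx
sudo nginx -t && sudo systemctl reload nginx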

Monitoring Fail2ban:

# Check banned IPs
sudo fail2ban-client status sshd

# Unban an IP if needed
sudo fail2ban-client set sshd unbanip IP_ADDRESS

# View fail2ban logs
sudo tail -f /var/log/fail2ban.log

# Check all active jails
sudo fail2ban-client status

9.6 Secure SSH Configuration

Critical Security Step: After setting up SSH key authentication, you must disable password authentication and root login to secure your Production server.

Step 1: Edit SSH Configuration File

# Open the SSH configuration file using nano
sudo nano /etc/ssh/sshd_config

Step 2: Disallow Root Logins

Find the line that says:

#PermitRootLogin prohibit-password

Change it to:

PermitRootLogin no

Step 3: Disable Password Authentication

Find the line that says:

#PasswordAuthentication yes

Change it to:

PasswordAuthentication no

Step 4: Configure Protocol Family (Optional)

If you only need IPv4 connections, find or add:

#AddressFamily any

Change it to:

AddressFamily inet

Step 5: Save and Exit

  • Press Ctrl + X to exit
  • Press Y to confirm saving
  • Press Enter to confirm the filename

Step 6: Test SSH Configuration

# Test the SSH configuration for syntax errors
sudo sshd -t

Step 7: Restart SSH Service

For Ubuntu 24.04 LTS (socket-based activation):

sudo systemctl restart ssh

For other distributions:

sudo systemctl restart sshd

Step 8: Verify SSH Access

IMPORTANT: Test SSH access from a new terminal window before closing your current session:

# Test Production Linode
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "SSH configuration test successful"'

What these changes do:

  • PermitRootLogin no: Completely disables root SSH access
  • PasswordAuthentication no: Disables password-based authentication
  • AddressFamily inet: Listens only on IPv4 (optional, for additional security)

Security Benefits:

  • No root access: Eliminates the most common attack vector
  • Key-only authentication: Prevents brute force password attacks
  • Protocol restriction: Limits SSH to IPv4 only (if configured)

Emergency Access:

If you lose SSH access, you can still access the server through:

  • Linode Console: Use the Linode dashboard's console access
  • Emergency mode: Boot into single-user mode if needed

Verification Commands:

# Check SSH configuration
sudo grep -E "(PermitRootLogin|PasswordAuthentication|AddressFamily)" /etc/ssh/sshd_config

# Check SSH service status
sudo systemctl status ssh

# Check SSH logs for any issues
sudo journalctl -u ssh -f

# Test SSH access from a new session
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'whoami'

Expected Output:

  • PermitRootLogin no
  • PasswordAuthentication no
  • AddressFamily inet (if configured)
  • SSH service should be "active (running)"
  • Test commands should return the deployment user name

Important Security Notes:

  1. Test before closing: Always test SSH access from a new session before closing your current SSH connection
  2. Keep backup: You can restore the original configuration if needed
  3. Monitor logs: Check /var/log/auth.log for SSH activity and potential attacks
  4. Regular updates: Keep SSH and system packages updated for security patches

Alternative: Manual Configuration with Backup

If you prefer to manually edit the file with a backup:

# Create backup
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.backup

# Edit the file
sudo nano /etc/ssh/sshd_config

# Test configuration
sudo sshd -t

# Restart service
sudo systemctl restart ssh
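
If you prefer a scripted version of Steps 2-3 (a sketch; review the resulting file and keep your current SSH session open until you have verified access):

# Back up, then flip the two hardening directives in place
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.backup
sudo sed -i -E 's/^#?PermitRootLogin .*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo sed -i -E 's/^#?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config

# Validate syntax and apply
sudo sshd -t && sudo systemctl restart ssh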

Step 10: Create Users

10.1 Create the PROD_SERVICE_USER User

# Create dedicated group for the production service account
sudo groupadd -r PROD_SERVICE_USER

# Create production service account user with dedicated group
sudo useradd -r -g PROD_SERVICE_USER -s /bin/bash -m -d /home/PROD_SERVICE_USER PROD_SERVICE_USER
echo "PROD_SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd

10.2 Verify Users

sudo su - PROD_SERVICE_USER
whoami
pwd
exit

sudo su - PROD_DEPLOY_USER
whoami
pwd
exit

Step 11: Install Forgejo Actions Runner

This step gives you a clean, FHS-aligned setup: prod-deploy (sudo) installs and configures everything system-wide, while the Forgejo runner and rootless Podman run as prod-service. Configs live under /etc, state under /var/lib, cache under /var/cache, logs go to journald, binaries in /usr/local/bin, and user runtime data under /run/user/<UID>.

11.1 Create vars

SVC_USER="prod-service" # non-sudo service user that runs jobs SVC_UID="$(id -u "$SVC_USER" 2>/dev/null || echo)" || true FORGEJO_URL="https://git.gcdo.org/" RUNNER_NAME="prod-runner" RUNNER_LABELS="prod"

11.2 System prerequisites (packages, idmaps, linger)

Packages (Ubuntu 24.04)

sudo apt-get update -y
sudo apt-get install -y podman uidmap slirp4netns fuse-overlayfs dbus-user-session curl jq ca-certificates

Ensure the service user exists

id "$SVC_USER" >/dev/null 2>&1 || sudo adduser --disabled-password --gecos "" "$SVC_USER" SVC_UID="$(id -u "$SVC_USER")"

Subordinate ID ranges for rootless

grep -q "^${SVC_USER}:" /etc/subuid || echo "${SVC_USER}:100000:65536" | sudo tee -a /etc/subuid >/dev/null grep -q "^${SVC_USER}:" /etc/subgid || echo "${SVC_USER}:100000:65536" | sudo tee -a /etc/subgid >/dev/null

Ensure the user manager exists/runs at boot

sudo loginctl enable-linger "$SVC_USER"
sudo systemctl start "user@${SVC_UID}.service"

11.3 Rootless Podman socket (user scope; runtime in /run/user/)

Tell root-invoked systemctl which user bus/runtime to target

export XDG_RUNTIME_DIR="/run/user/${SVC_UID}"
export DBUS_SESSION_BUS_ADDRESS="unix:path=${XDG_RUNTIME_DIR}/bus"

Enable the users Docker-API-compatible Podman UNIX socket (no TCP)

sudo -u "$SVC_USER" XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR DBUS_SESSION_BUS_ADDRESS=$DBUS_SESSION_BUS_ADDRESS
systemctl --user enable --now podman.socket

Verify the UNIX socket path under /run (FHS: volatile runtime data)

sudo -u "$SVC_USER" ss -lx | grep 'podman/podman.sock' >/dev/null || { echo "Podman user socket missing"; exit 1; }

11.4 Place the runner binary in /usr/local/bin (local admin install)

sudo install -d -m 0755 /usr/local/bin

Get the latest version dynamically

LATEST_VERSION=$(curl -s https://code.forgejo.org/api/v1/repos/forgejo/runner/releases | jq -r '.[0].tag_name')
echo "Downloading Forgejo runner version: $LATEST_VERSION"

curl -fsSL "https://code.forgejo.org/forgejo/runner/releases/download/${LATEST_VERSION}/forgejo-runner-${LATEST_VERSION#v}-linux-amd64"
| sudo tee /usr/local/bin/forgejo-runner >/dev/null sudo chmod 0755 /usr/local/bin/forgejo-runner

11.5 Create FHS directories for runner state and cache

State (tokens, work dirs) → /var/lib; Cache → /var/cache

sudo install -d -o "$SVC_USER" -g "$SVC_USER" -m 0750 /var/lib/forgejo-runner sudo install -d -o "$SVC_USER" -g "$SVC_USER" -m 0750 /var/lib/forgejo-runner/work sudo install -d -o "$SVC_USER" -g "$SVC_USER" -m 0750 /var/cache/forgejo-runner

11.6 Register the runner as prod-service with state in /var/lib

REG_TOKEN="PASTE_A_FRESH_REGISTRATION_TOKEN_HERE" # short-lived token

Run registration inside /var/lib/forgejo-runner so .runner lands there

Ensure state dir exists and owned by the service user

sudo install -d -o prod-service -g prod-service -m 0750 /var/lib/forgejo-runner

Re-register and write .runner into /var/lib/forgejo-runner

sudo -u prod-service \
  FORGEJO_URL="$FORGEJO_URL" \
  REG_TOKEN="$REG_TOKEN" \
  RUNNER_NAME="$RUNNER_NAME" \
  RUNNER_LABELS="$RUNNER_LABELS" \
  bash -lc '
    set -Eeuo pipefail
    cd /var/lib/forgejo-runner
    pwd   # should print: /var/lib/forgejo-runner
    /usr/local/bin/forgejo-runner register \
      --instance "$FORGEJO_URL" \
      --token "$REG_TOKEN" \
      --name "$RUNNER_NAME" \
      --labels "$RUNNER_LABELS" \
      --no-interactive
    chmod 600 .runner
    stat -c "%U:%G %a %n" .runner
  '

11.7 System-wide runner config in /etc

sudo install -d -m 0755 /etc/forgejo-runner
sudo tee /etc/forgejo-runner/config.yaml >/dev/null <<EOF
log:
  level: info
runner:
  file: /var/lib/forgejo-runner/.runner
  capacity: 1
  fetch_timeout: 5s
  report_interval: 1s
container:
  engine: docker
  docker_host: "unix:///run/user/${SVC_UID}/podman/podman.sock"  # user UNIX socket
  enable_ipv6: false
host:
  workdir_parent: /var/lib/forgejo-runner/work
EOF
sudo chmod 0644 /etc/forgejo-runner/config.yaml

11.8 Create a system unit for the runner in /etc/systemd/system

(Runs as prod-service, with FHS paths & env pointing at the users runtime/socket.)

sudo tee /etc/systemd/system/forgejo-runner.service >/dev/null <<EOF
[Unit]
Description=Forgejo Actions Runner (Production)
After=network-online.target
Wants=network-online.target

[Service]
User=prod-service
Group=prod-service
WorkingDirectory=/var/lib/forgejo-runner
Environment=FORGEJO_RUNNER_CONFIG=/etc/forgejo-runner/config.yaml
Environment=XDG_RUNTIME_DIR=/run/user/999
Environment=DOCKER_HOST=unix:///run/user/999/podman/podman.sock

ExecStart=/usr/local/bin/forgejo-runner daemon

NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=read-only
ReadWritePaths=/var/lib/forgejo-runner /etc/forgejo-runner
LockPersonality=yes
RestrictSUIDSGID=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
ProtectKernelLogs=yes
ProtectClock=yes
RestrictNamespaces=yes
SystemCallFilter=@system-service
CapabilityBoundingSet=
AmbientCapabilities=
Restart=always
RestartSec=2s

[Install]
WantedBy=multi-user.target
EOF

Note: The Environment lines above hard-code UID 999; replace 999 with the actual UID of prod-service (the $SVC_UID value from 11.2) so the paths match the user socket configured earlier.

sudo systemctl daemon-reload
sudo systemctl enable --now forgejo-runner.service
sudo systemctl status forgejo-runner.service --no-pager

11.9 Security sanity (no Docker/Podman TCP, correct socket, minimal exposure)

No Docker/Podman TCP sockets (2375/2376)

ss -ltnp | grep -E '(2375|2376)' && { echo "ERROR: Docker/Podman TCP open"; exit 1; } || echo "OK: no Docker/Podman TCP sockets"

Root podman services should be disabled/inactive (we use user socket)

sudo systemctl is-enabled podman.socket 2>/dev/null || echo "OK: root podman.socket not enabled"
sudo systemctl is-active podman.socket 2>/dev/null || echo "OK: root podman.socket not active"

Runner sees the user socket

sudo -iu "$SVC_USER" podman info --format '{{.Host.ServiceIsRemote}} {{.Host.RemoteSocket.Path}}'

Expected output: true unix:///run/user/<SVC_UID>/podman/podman.sock (ServiceIsRemote must be true because the Forgejo runner connects to Podman through the Docker-compatible REST API)
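
As an extra sanity check (a sketch; curl is already installed from the package list above), ping the Docker-compatible API through the user socket. It should print OK:

sudo -u prod-service curl -sS --unix-socket "/run/user/$(id -u prod-service)/podman/podman.sock" http://localhost/_ping; echo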

11.10 Test Runner Configuration

# Check if the runner is running
sudo systemctl status forgejo-runner.service

# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager

# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "prod-runner" (the RUNNER_NAME set in 11.1) with status "Online"

Part 3: Final Configuration and Testing

Step 12: Configure Forgejo Repository Secrets

Go to your Forgejo repository and add these secrets in Settings → Secrets and Variables → Actions:

Required Secrets:

  • APP_NAME: Your application name (e.g., sharenet)
  • REGISTRY_HOST: Your Forgejo instance's registry URL
  • REGISTRY_USERNAME: Your Forgejo username for registry authentication
  • REGISTRY_TOKEN: Personal Access Token with write:packages scope for registry pushes
  • PODMAN_CLIENT_IMG_DIGEST: Pinned Podman client image digest (e.g., quay.io/podman/stable@sha256:...)
  • RUST_IMG_DIGEST: Pinned Rust image digest (e.g., docker.io/library/rust@sha256:...)
  • NODE_IMG_DIGEST: Pinned Node.js image digest (e.g., docker.io/library/node@sha256:...)
  • POSTGRES_IMG_DIGEST: Pinned PostgreSQL image digest (e.g., docker.io/library/postgres@sha256:...)
  • PROD_BACKEND_HOST
  • PROD_BACKEND_PORT
  • PROD_DB_USERNAME
  • PROD_DB_PASSWORD
  • PROD_DB_HOST
  • PROD_DB_PORT
  • PROD_DB_DATABASE_NAME
  • PROD_FRONTEND_PORT

How to Obtain Each Secret (with Purpose and Commands):

1. Image Digests (Security-Critical - Prevent Supply Chain Attacks):

  • PODMAN_CLIENT_IMG_DIGEST: Used for secure ephemeral PiP containers in CI

    # Get Podman client image digest and create full reference
    DIGEST=$(podman manifest inspect quay.io/podman/stable:latest | jq -r '.manifests[] | select(.platform.os=="linux" and .platform.architecture=="amd64") | .digest')
    export PODMAN_CLIENT_IMG_DIGEST="quay.io/podman/stable@${DIGEST}"
    echo "PODMAN_CLIENT_IMG_DIGEST=${PODMAN_CLIENT_IMG_DIGEST}"
    # Result: PODMAN_CLIENT_IMG_DIGEST=quay.io/podman/stable@sha256:482bce3a829893f0dc3bf497c9a7609341fca11b34e35a92d308eb971ad61adb
    
  • RUST_IMG_DIGEST: Used for Rust backend testing and building

    # Get Rust image digest and create full reference
    DIGEST=$(podman manifest inspect docker.io/library/rust:latest | jq -r '.manifests[] | select(.platform.os=="linux" and .platform.architecture=="amd64") | .digest')
    export RUST_IMG_DIGEST="docker.io/library/rust@${DIGEST}"
    echo "RUST_IMG_DIGEST=${RUST_IMG_DIGEST}"
    # Result: RUST_IMG_DIGEST=docker.io/library/rust@sha256:f61d2a4020b0dec1f21c2320fdcb8b256dd96dfc015a090893b11841bb708983
    
  • NODE_IMG_DIGEST: Used for Node.js frontend testing and building

    # Get Node.js image digest and create full reference
    DIGEST=$(podman manifest inspect docker.io/library/node:latest | jq -r '.manifests[] | select(.platform.os=="linux" and .platform.architecture=="amd64") | .digest')
    export NODE_IMG_DIGEST="docker.io/library/node@${DIGEST}"
    echo "NODE_IMG_DIGEST=${NODE_IMG_DIGEST}"
    # Result: NODE_IMG_DIGEST=docker.io/library/node@sha256:9d4ff7cc3a5924a28389087d9735dfbf77ccb04bc3a0d5f86016d484dfa965c1
    
  • POSTGRES_IMG_DIGEST: Used for PostgreSQL database in integration tests

    # Get PostgreSQL image digest and create full reference
    DIGEST=$(podman manifest inspect docker.io/library/postgres:latest | jq -r '.manifests[] | select(.platform.os=="linux" and .platform.architecture=="amd64") | .digest')
    export POSTGRES_IMG_DIGEST="docker.io/library/postgres@${DIGEST}"
    echo "POSTGRES_IMG_DIGEST=${POSTGRES_IMG_DIGEST}"
    # Result: POSTGRES_IMG_DIGEST=docker.io/library/postgres@sha256:16508ad37e81dd63a94cdc620b0cfa1b771c4176b4e0f1cbc3a670431643e3ed
    

2. Forgejo Registry Credentials (Image Storage):

  • REGISTRY_HOST: Your Forgejo instance hostname (e.g., git.example.com). Purpose: where container images are pushed to and pulled from.

  • REGISTRY_USERNAME: Your Forgejo username. Purpose: authentication for registry pushes.

  • REGISTRY_TOKEN: Personal Access Token with write:packages scope. Purpose: secure authentication without password exposure. How to create: Forgejo → Settings → Applications → Generate New Token.

3. Application Configuration:

  • APP_NAME: Your application name (e.g., sharenet). Purpose: image naming and directory structure.

Security Note: All secrets are managed by Forgejo and never exposed in logs or environment variables. The ephemeral PiP approach ensures secrets are only used during execution and never persist.

Note: This setup uses custom Dockerfiles for testing environments with base images. The CI pipeline automatically checks if base images exist in Forgejo Container Registry and pulls them from Docker Hub only when needed, eliminating rate limiting issues and providing better control over the testing environment.

Step 13: Test Complete Pipeline

13.1 Trigger a Test Build

  1. Make a small change to your repository (e.g., update a comment or add a test file)
  2. Commit and push the changes to trigger the CI/CD pipeline
  3. Monitor the build in your Forgejo repository → Actions tab

13.2 Verify Pipeline Steps

The pipeline should execute these steps in order:

  1. Checkout: Clone the repository
  2. Setup DinD: Configure Docker-in-Docker environment
  3. Test Backend: Run backend tests in isolated environment
  4. Test Frontend: Run frontend tests in isolated environment
  5. Build Backend: Build backend Docker image in DinD
  6. Build Frontend: Build frontend Docker image in DinD
  7. Push to Registry: Push images to Forgejo Container Registry from DinD
  8. Deploy to Production: Deploy to production server

13.3 Check Forgejo Container Registry

# On CI/CD Linode
cd /opt/APP_NAME

# Check if new images were pushed (using unauthenticated port 443)
curl -k https://localhost:443/v2/_catalog

# Check specific repository tags
curl -k https://localhost:443/v2/APP_NAME/backend/tags/list
curl -k https://localhost:443/v2/APP_NAME/frontend/tags/list

# Alternative: Check registry via public endpoint
curl -k https://YOUR_CI_CD_IP/v2/_catalog

# Check authenticated endpoint (should require authentication)
curl -k https://YOUR_CI_CD_IP:4443/v2/_catalog
# Expected: This should return authentication error without credentials

13.4 Verify Production Deployment

# On Production Linode
cd /opt/APP_NAME

# Check if pods are running with new images
podman pod ps

# Check application health
curl http://localhost:3000
curl http://localhost:3001/health

# Check container logs for any errors
podman logs sharenet-production-pod-backend
podman logs sharenet-production-pod-frontend

13.5 Test Application Functionality

  1. Frontend: Visit your production URL (IP address)
  2. Backend API: Test API endpoints
  3. Database: Verify database connections
  4. Logs: Check for any errors in application logs
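
A minimal sketch of these checks from the Production Linode (the ports and /health endpoint follow the examples earlier in this guide; the PostgreSQL container name is an assumption, so adjust it to your pod definition):

# Frontend and backend API
curl -fsS http://localhost:3000 >/dev/null && echo "frontend OK"
curl -fsS http://localhost:3001/health && echo "backend OK"

# Database connectivity (assumes a container named sharenet-production-pod-postgres)
podman exec sharenet-production-pod-postgres pg_isready

# Recent application logs
podman logs --since 10m sharenet-production-pod-backend | tail -n 50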

Step 14: Final Verification

14.1 Security Check

# Check firewall status
sudo ufw status

# Check fail2ban status
sudo systemctl status fail2ban

# Check SSH access (should be key-based only)
sudo grep "PasswordAuthentication" /etc/ssh/sshd_config

14.2 Performance Check

# Check system resources
htop

# Check disk usage
df -h

# Check Podman disk usage
podman system df

14.3 Backup Verification

# Test backup script
cd /opt/APP_NAME
./scripts/backup.sh --dry-run

# Run actual backup
./scripts/backup.sh

Step 15: Documentation and Maintenance

15.1 Update Documentation

  1. Update README.md with deployment information
  2. Document environment variables and their purposes
  3. Create troubleshooting guide for common issues
  4. Document backup and restore procedures

15.2 Set Up Monitoring Alerts

# Set up monitoring cron job
(crontab -l 2>/dev/null; echo "*/5 * * * * cd /opt/APP_NAME && ./scripts/monitor.sh --type production >> /tmp/monitor.log 2>&1") | crontab -

# Check monitoring logs
tail -f /tmp/monitor.log

15.3 Regular Maintenance Tasks

Daily:

  • Check application logs for errors
  • Monitor system resources
  • Verify backup completion

Weekly:

  • Review security logs
  • Update system packages
  • Test backup restoration

Monthly:

  • Review and rotate logs
  • Review and update documentation
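
A sketch of a helper script covering the "update system packages" and log-rotation items above (assumes the apt/journald/Podman setup used throughout this guide):

#!/usr/bin/env bash
# maintenance.sh - periodic host maintenance helper (illustrative)
set -euo pipefail

# Weekly: update system packages
sudo apt-get update -y
sudo apt-get upgrade -y

# Monthly: trim journald logs older than 30 days and remove unused images
sudo journalctl --vacuum-time=30d
podman image prune -f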

Forgejo Container Registry Setup

This repository uses Forgejo's built-in container registry which provides a simpler, more integrated solution while maintaining the same Podman Pods CI approach.

Prerequisites

Before using this setup, ensure that:

  1. Forgejo Container Registry (Packages) is enabled on your Forgejo instance
  2. App repository (or its owner org) is public so anonymous pulls work
  3. A PAT with write:packages scope exists for CI pushes
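
A quick way to confirm the first prerequisite (a sketch; REGISTRY_HOST is the placeholder used throughout this guide). The registry API should answer with HTTP 200 or 401:

curl -sS -o /dev/null -w "%{http_code}\n" https://REGISTRY_HOST/v2/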

Repository Secrets

The CI pipeline uses the following secrets. Only the required secrets are needed for current CI operations:

| Secret Name | Description | Example | Required |
| --- | --- | --- | --- |
| APP_NAME | Your application name | sharenet | Yes |
| REGISTRY_HOST | Your Forgejo instance hostname | forgejo.example.com | Yes |
| REGISTRY_USERNAME | Bot or owner account username | ci-bot | Yes |
| REGISTRY_TOKEN | PAT with write:packages scope | gto_... | Yes |
| SSH_PRIVATE_KEY | SSH private key for deployment | -----BEGIN OPENSSH PRIVATE KEY-----... | Yes |
| SSH_KNOWN_HOSTS | SSH known_hosts entry | production-ip ecdsa-sha2-nistp256... | Yes |
| PODMAN_CLIENT_IMG_DIGEST | Pinned Podman client image digest | quay.io/podman/stable@sha256:... | Yes |
| RUST_IMG_DIGEST | Pinned Rust image digest | docker.io/library/rust@sha256:... | Yes |
| NODE_IMG_DIGEST | Pinned Node.js image digest | docker.io/library/node@sha256:... | Yes |
| POSTGRES_IMG_DIGEST | Pinned PostgreSQL image digest | docker.io/library/postgres@sha256:... | Yes |

How It Works

CI Pipeline Changes

The CI pipeline has been updated to:

  1. Login to Forgejo Container Registry using the provided credentials
  2. Build images with tags like REGISTRY_HOST/owner/repo/backend:GIT_SHA
  3. Push images to Forgejo's built-in registry
  4. Optionally sign images with Cosign if keys are provided
  5. Deploy from Forgejo registry instead of custom registry
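
For illustration only (this is not the actual workflow file), the login/build/push steps map onto Podman commands like the following, with OWNER/REPO and GIT_SHA standing in for values the workflow derives at run time:

# Authenticate to the Forgejo Container Registry
podman login "$REGISTRY_HOST" -u "$REGISTRY_USERNAME" -p "$REGISTRY_TOKEN"

# Build and tag the backend image with the commit SHA
podman build -t "$REGISTRY_HOST/OWNER/REPO/backend:$GIT_SHA" ./backend

# Push to the registry; production later pulls this exact tag (or its digest)
podman push "$REGISTRY_HOST/OWNER/REPO/backend:$GIT_SHA"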

Anonymous Pulls

Since the repository is public, applications can pull images anonymously:

# Pull by tag (run as PROD_SERVICE_USER in production)
sudo -u PROD_SERVICE_USER podman pull forgejo.example.com/owner/repo/backend:latest

# Pull by digest (run as PROD_SERVICE_USER in production)
sudo -u PROD_SERVICE_USER podman pull forgejo.example.com/owner/repo/backend@sha256:abc123...

Image Naming Convention

Images are stored in Forgejo's registry using this format:

  • REGISTRY_HOST/OWNER_REPO/backend:TAG
  • REGISTRY_HOST/OWNER_REPO/frontend:TAG
  • REGISTRY_HOST/OWNER_REPO/base-image:TAG

For example:

  • forgejo.example.com/devteam/sharenet/backend:abc123
  • forgejo.example.com/devteam/sharenet/frontend:abc123

Benefits of Forgejo Container Registry

  • Simplified setup - No custom registry installation required
  • Integrated security - Uses Forgejo's built-in authentication
  • Automatic HTTPS - No certificate management needed
  • Built-in UI - View and manage images through Forgejo web interface
  • Anonymous pulls - Public repositories allow unauthenticated pulls
  • Same CI approach - Maintains existing Podman Pods workflow

Troubleshooting

Common Issues

  1. Authentication failures:

    • Verify REGISTRY_TOKEN has write:packages scope
    • Check REGISTRY_HOST is correct (no protocol, just hostname)
  2. Pull failures:

    • Ensure repository/org is public for anonymous pulls
    • Verify image tags exist in Forgejo registry
  3. Cosign signing failures:

    • Check COSIGN_PRIVATE_KEY and COSIGN_PASSWORD are set
    • Verify private key format is correct

Verification Commands

# Test registry access
podman login REGISTRY_HOST -u REGISTRY_USERNAME -p REGISTRY_TOKEN

# List available images (run as PROD_SERVICE_USER in production)
sudo -u PROD_SERVICE_USER podman search REGISTRY_HOST/OWNER_REPO

# Pull and verify image (run as PROD_SERVICE_USER in production)
sudo -u PROD_SERVICE_USER podman pull REGISTRY_HOST/OWNER_REPO/backend:TAG
sudo -u PROD_SERVICE_USER podman image inspect REGISTRY_HOST/OWNER_REPO/backend:TAG

🎉 Congratulations!

You have successfully set up a complete CI/CD pipeline with:

  • Automated testing on every code push in isolated DinD environment
  • Docker image building and Forgejo Container Registry storage
  • Automated deployment to production
  • Health monitoring and logging
  • Backup and cleanup automation
  • Security hardening with proper user separation
  • SSL/TLS support with self-signed certificates and mTLS authentication
  • Zero resource contention between CI/CD and Forgejo Container Registry
  • FHS-compliant directory structure for better organization and security
  • Robust rootless services via systemd user manager
  • Host TLS reverse proxy with rootless registry isolation

Your application is now ready for continuous deployment with proper security, monitoring, and maintenance procedures in place!

Cleanup Installation Files

After successful setup, you can clean up the installation files to remove sensitive information:

Security Note: Forgejo Container Registry uses built-in authentication. Ensure your Personal Access Tokens are stored securely and never committed to version control.

  • Rotate tokens regularly by generating new Personal Access Tokens
  • Use minimal permissions - only grant write:packages scope for CI/CD operations
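
The guide does not prescribe exact cleanup commands; a sketch of the kind of cleanup intended (the variable names are the ones used earlier in this guide, and clearing history only affects the current user's shell):

# Unset any token variables still exported in this shell
unset REG_TOKEN REGISTRY_TOKEN

# Clear shell history entries that may contain tokens
history -c && history -w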

CI/CD Workflow Summary Table

| Stage | What Runs | How/Where |
| --- | --- | --- |
| Test | All integration/unit tests | ci-pod.yaml |
| Build | Build & push images | Podman build & push |
| Deploy | Deploy to production | prod-pod.yml |

How it works:

  • Test: The workflow spins up a full test environment using ci-pod.yaml (Postgres, backend, frontend, etc.) and runs all tests inside containers.
  • Build: If tests pass, the workflow uses Podman to build backend and frontend images and push them to Forgejo Container Registry.
  • Deploy: The production runner pulls images from Forgejo Container Registry and deploys the stack using prod-pod.yml.

Expected Output:

  • Each stage runs in its own isolated environment.
  • Test failures stop the pipeline before any images are built or deployed.
  • Only tested images are deployed to production.

Manual Testing with Podman Pods

You can use the same test environment locally that the CI pipeline uses for integration testing. This is useful for debugging, development, or verifying your setup before pushing changes.

Note: Since the CI pipeline runs tests inside a DinD container, local testing requires a similar setup.

Start the Test Environment (Local Development)

For local development testing, you can run the test environment directly:

# Start the test environment locally
podman play kube ci-pod.yaml

# Check service health
podman pod ps

Important: This local setup is for development only. The CI pipeline uses a more isolated DinD environment.

Run Tests Manually

You can now exec into the containers to run tests or commands as needed. For example:

# Run backend tests
docker exec ci-cd-test-rust cargo test --all

# Run frontend tests
docker exec ci-cd-test-node npm run test

Cleanup

When you're done, stop and remove all test containers:

podman pod stop ci-cd-test-pod && podman pod rm ci-cd-test-pod

Tip: The CI pipeline uses the same test containers but runs them inside a DinD environment for complete isolation.