CI/CD Pipeline Setup Guide
This guide covers setting up a complete Continuous Integration/Continuous Deployment (CI/CD) pipeline with a CI/CD Linode and Production Linode for automated builds, testing, and deployments using Docker-in-Docker (DinD) for isolated CI operations.
Architecture Overview
┌─────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│  Forgejo Host   │     │  CI/CD Linode    │     │ Production Linode│
│  (Repository)   │     │ (Actions Runner) │     │ (Docker Deploy)  │
│                 │     │ + Docker Registry│     │                  │
│                 │     │ + DinD Container │     │                  │
└─────────────────┘     └──────────────────┘     └──────────────────┘
         │                       │                        │
         │                       │                        │
         └────────── Push ───────┼────────────────────────┘
                                 │
                                 └────────── Deploy ──────┘
Pipeline Flow
- Code Push: Developer pushes code to Forgejo repository
- Automated Testing: CI/CD Linode runs tests in isolated DinD environment
- Image Building: If tests pass, Docker images are built within DinD
- Registry Push: Images are pushed to Docker Registry from DinD
- Production Deployment: Production Linode pulls images and deploys
- Health Check: Application is verified and accessible
Key Benefits of DinD Approach
For Rust Testing:
- ✅ Fresh environment every test run
- ✅ Parallel execution capability
- ✅ Isolated dependencies - no test pollution
- ✅ Fast cleanup - just restart DinD container
For CI/CD Operations:
- ✅ Zero resource contention with Docker Registry
- ✅ Simple cleanup - one-line container restart
- ✅ Perfect isolation - CI/CD can't affect Docker Registry
- ✅ Consistent environment - same setup every time
For Maintenance:
- ✅ Reduced complexity - no complex cleanup scripts
- ✅ Easy debugging - isolated environment
- ✅ Reliable operations - no interference between services
Prerequisites
- Two Ubuntu 24.04 LTS Linodes with root access
- Basic familiarity with Linux commands and SSH
- Forgejo repository with Actions enabled
- Optional: Domain name for Production Linode (for SSL/TLS)
Quick Start
- Set up CI/CD Linode (Steps 0-9)
- Set up Production Linode (Steps 10-16)
- Set up Forgejo repository secrets (Step 17)
- Test the complete pipeline (Step 18)
What's Included
CI/CD Linode Features
- Forgejo Actions runner for automated builds
- Docker-in-Docker (DinD) container for isolated CI operations
- Docker Registry with Caddy reverse proxy for image storage
- Unauthenticated pulls, authenticated pushes
- Automatic HTTPS with Caddy
- Secure SSH communication with production
- Simplified cleanup - just restart DinD container
Production Linode Features
- Docker-based application deployment
- Optional SSL/TLS certificate management (if domain is provided)
- Nginx reverse proxy with security headers
- Automated backups and monitoring
- Firewall and fail2ban protection
Pipeline Features
- Automated testing on every code push in isolated environment
- Automated image building and registry push from DinD
- Automated deployment to production
- Rollback capability with image versioning
- Health monitoring and logging
- Zero resource contention between CI/CD and Docker Registry
Security Model and User Separation
This setup uses a principle of least privilege approach with separate users for different purposes:
User Roles
- Root User
  - Purpose: Initial system setup only
  - SSH Access: Disabled after setup
  - Privileges: Full system access (used only during initial configuration)
- Deployment User (CI_DEPLOY_USER on CI Linode, PROD_DEPLOY_USER on Production Linode)
  - Purpose: SSH access, deployment tasks, system administration
  - SSH Access: Enabled with key-based authentication
  - Privileges: Sudo access for deployment and administrative tasks
  - Example: ci-deploy / prod-deploy
- Service Account (CI_SERVICE_USER on CI Linode, PROD_SERVICE_USER on Production Linode)
  - Purpose: Running application services (Docker containers, databases)
  - SSH Access: None (no login shell)
  - Privileges: No sudo access, minimal system access
  - Example: ci-service / prod-service
Security Benefits
- No root SSH access: Eliminates the most common attack vector
- Principle of least privilege: Each user has only the access they need
- Separation of concerns: Deployment tasks vs. service execution are separate
- Audit trail: Clear distinction between deployment and service activities
- Reduced attack surface: Service account has minimal privileges
File Permissions
- Application files: Owned by CI_SERVICE_USER (CI Linode) / PROD_SERVICE_USER (Production Linode) for security
- Docker operations: Run by CI_SERVICE_USER with Docker group access (CI Linode) / PROD_SERVICE_USER with Docker group access (Production Linode)
- Service execution: Run by CI_SERVICE_USER (no sudo needed) / PROD_SERVICE_USER (no sudo needed)
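Once the users exist (Steps 2 and 11 below), a quick way to confirm this separation on either Linode is to check group membership and sudo rights. The usernames here are the example names used in this guide:
# Group membership: the service account should list "docker" but not "sudo"
id ci-deploy
id ci-service
# Sudo rights: the deployment user should have them, the service account should not
sudo -l -U ci-deploy
sudo -l -U ci-service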
Prerequisites and Initial Setup
What's Already Done (Assumptions)
This guide assumes you have already:
- Created two Ubuntu 24.04 LTS Linodes with root access
- Set root passwords for both Linodes
- Have SSH client installed on your local machine
- Have Forgejo repository with Actions enabled
- Optional: Domain name pointing to Production Linode's IP addresses
Step 0: Initial SSH Access and Verification
Before proceeding with the setup, you need to establish initial SSH access to both Linodes.
0.1 Get Your Linode IP Addresses
From your Linode dashboard, note the IP addresses for:
- CI/CD Linode: YOUR_CI_CD_IP (IP address only, no domain needed)
- Production Linode: YOUR_PRODUCTION_IP (IP address for SSH, domain for web access)
0.2 Test Initial SSH Access
Test SSH access to both Linodes:
# Test CI/CD Linode (IP address only)
ssh root@YOUR_CI_CD_IP
# Test Production Linode (IP address only)
ssh root@YOUR_PRODUCTION_IP
Expected output: SSH login prompt asking for root password.
If something goes wrong:
- Verify the IP addresses are correct
- Check that SSH is enabled on the Linodes
- Ensure your local machine can reach the Linodes (no firewall blocking)
0.3 Choose Your Names
Before proceeding, decide on:
- CI Service Account Name: Choose a username for the CI service account (e.g., ci-service)
  - Replace CI_SERVICE_USER in this guide with your chosen name
  - This account runs the CI pipeline and Docker operations on the CI Linode
- CI Deployment User Name: Choose a username for CI deployment tasks (e.g., ci-deploy)
  - Replace CI_DEPLOY_USER in this guide with your chosen name
  - This account has sudo privileges for deployment tasks
- Application Name: Choose a name for your application (e.g., sharenet)
  - Replace APP_NAME in this guide with your chosen name
- Domain Name (Optional): If you have a domain, note it for SSL configuration
  - Replace your-domain.com in this guide with your actual domain
Example:
- If you choose ci-service as the CI service account, ci-deploy as the CI deployment user, and sharenet as the application name:
  - Replace all CI_SERVICE_USER with ci-service
  - Replace all CI_DEPLOY_USER with ci-deploy
  - Replace all APP_NAME with sharenet
- If you have a domain example.com, replace your-domain.com with example.com
Security Model:
- CI Service Account (CI_SERVICE_USER): Runs CI pipeline and Docker operations, no sudo access
- CI Deployment User (CI_DEPLOY_USER): Handles SSH communication and orchestration, has sudo access
- Root: Only used for initial setup, then disabled for SSH access
0.4 Set Up SSH Key Authentication for Local Development
Important: This step should be done on both Linodes to enable secure SSH access from your local development machine.
0.4.1 Generate SSH Key on Your Local Machine
On your local development machine, generate an SSH key pair:
# Generate SSH key pair (if you don't already have one)
ssh-keygen -t ed25519 -C "your-email@example.com" -f ~/.ssh/id_ed25519 -N ""
# Or use existing key if you have one
ls ~/.ssh/id_ed25519.pub
0.4.2 Add Your Public Key to Both Linodes
Copy your public key to both Linodes:
# Copy your public key to CI/CD Linode
ssh-copy-id root@YOUR_CI_CD_IP
# Copy your public key to Production Linode
ssh-copy-id root@YOUR_PRODUCTION_IP
Alternative method (if ssh-copy-id doesn't work):
# Copy your public key content
cat ~/.ssh/id_ed25519.pub
# Then manually add to each server
ssh root@YOUR_CI_CD_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys
ssh root@YOUR_PRODUCTION_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys
0.4.3 Test SSH Key Authentication
Test that you can access both servers without passwords:
# Test CI/CD Linode
ssh root@YOUR_CI_CD_IP 'echo "SSH key authentication works for CI/CD"'
# Test Production Linode
ssh root@YOUR_PRODUCTION_IP 'echo "SSH key authentication works for Production"'
Expected output: The echo messages should appear without password prompts.
0.4.4 Create Deployment Users
On both Linodes, create the deployment user with sudo privileges:
For CI Linode:
# Create CI deployment user
sudo useradd -m -s /bin/bash CI_DEPLOY_USER
sudo usermod -aG sudo CI_DEPLOY_USER
# Set a secure password (for emergency access only)
echo "CI_DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd
# Copy your SSH key to the CI deployment user
sudo mkdir -p /home/CI_DEPLOY_USER/.ssh
sudo cp ~/.ssh/authorized_keys /home/CI_DEPLOY_USER/.ssh/
sudo chown -R CI_DEPLOY_USER:CI_DEPLOY_USER /home/CI_DEPLOY_USER/.ssh
sudo chmod 700 /home/CI_DEPLOY_USER/.ssh
sudo chmod 600 /home/CI_DEPLOY_USER/.ssh/authorized_keys
# Configure sudo to use SSH key authentication (most secure)
echo "CI_DEPLOY_USER ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/CI_DEPLOY_USER
sudo chmod 440 /etc/sudoers.d/CI_DEPLOY_USER
For Production Linode:
# Create production deployment user
sudo useradd -m -s /bin/bash PROD_DEPLOY_USER
sudo usermod -aG sudo PROD_DEPLOY_USER
# Set a secure password (for emergency access only)
echo "PROD_DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd
# Copy your SSH key to the production deployment user
sudo mkdir -p /home/PROD_DEPLOY_USER/.ssh
sudo cp ~/.ssh/authorized_keys /home/PROD_DEPLOY_USER/.ssh/
sudo chown -R PROD_DEPLOY_USER:PROD_DEPLOY_USER /home/PROD_DEPLOY_USER/.ssh
sudo chmod 700 /home/PROD_DEPLOY_USER/.ssh
sudo chmod 600 /home/PROD_DEPLOY_USER/.ssh/authorized_keys
# Configure sudo to use SSH key authentication (most secure)
echo "PROD_DEPLOY_USER ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/PROD_DEPLOY_USER
sudo chmod 440 /etc/sudoers.d/PROD_DEPLOY_USER
Security Note: This configuration allows the deployment users to use sudo without a password, which is more secure for CI/CD automation since there are no passwords to store or expose. The random password is set for emergency console access only.
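If you want to tighten this further, sudo can be limited to an explicit command list instead of ALL. The command list below is purely illustrative and would need to match whatever your deployment tasks actually run:
# Optional, more restrictive alternative (hypothetical allow-list - adjust before using)
echo "CI_DEPLOY_USER ALL=(ALL) NOPASSWD: /usr/bin/systemctl, /usr/bin/docker" | sudo tee /etc/sudoers.d/CI_DEPLOY_USER
sudo chmod 440 /etc/sudoers.d/CI_DEPLOY_USER
# Validate the sudoers file before relying on it
sudo visudo -c -f /etc/sudoers.d/CI_DEPLOY_USER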
0.4.5 Test Sudo Access
Test that the deployment users can use sudo without password prompts:
# Test CI deployment user sudo access
ssh CI_DEPLOY_USER@YOUR_CI_CD_IP 'sudo whoami'
# Test production deployment user sudo access
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'sudo whoami'
Expected output: Both commands should return root without prompting for a password.
0.4.6 Test Deployment User Access
Test that you can access both servers as the deployment users:
# Test CI/CD Linode
ssh CI_DEPLOY_USER@YOUR_CI_CD_IP 'echo "CI deployment user SSH access works for CI/CD"'
# Test Production Linode
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "Production deployment user SSH access works for Production"'
Expected output: The echo messages should appear without password prompts.
0.4.7 Create SSH Config for Easy Access
On your local machine, create an SSH config file for easy access:
# Create SSH config
cat > ~/.ssh/config << 'EOF'
Host ci-cd-dev
HostName YOUR_CI_CD_IP
User CI_DEPLOY_USER
IdentityFile ~/.ssh/id_ed25519
StrictHostKeyChecking no
Host production-dev
HostName YOUR_PRODUCTION_IP
User PROD_DEPLOY_USER
IdentityFile ~/.ssh/id_ed25519
StrictHostKeyChecking no
EOF
chmod 600 ~/.ssh/config
Now you can access servers easily:
ssh ci-cd-dev
ssh production-dev
0.4.8 Secure SSH Configuration
Critical Security Step: After setting up SSH key authentication, you must disable password authentication and root login to secure your servers.
For Both CI/CD and Production Linodes:
Step 1: Edit SSH Configuration File
# Open the SSH configuration file using nano
sudo nano /etc/ssh/sshd_config
Step 2: Disallow Root Logins
Find the line that says:
#PermitRootLogin prohibit-password
Change it to:
PermitRootLogin no
Step 3: Disable Password Authentication
Find the line that says:
#PasswordAuthentication yes
Change it to:
PasswordAuthentication no
Step 4: Configure Protocol Family (Optional)
If you only need IPv4 connections, find or add:
#AddressFamily any
Change it to:
AddressFamily inet
Step 5: Save and Exit
- Press Ctrl + X to exit
- Press Y to confirm saving
- Press Enter to confirm the filename
Step 6: Test SSH Configuration
# Test the SSH configuration for syntax errors
sudo sshd -t
Step 7: Restart SSH Service
For Ubuntu 22.10+ (socket-based activation):
sudo systemctl enable --now ssh.service
For other distributions:
sudo systemctl restart sshd
Step 8: Verify SSH Access
IMPORTANT: Test SSH access from a new terminal window before closing your current session:
# Test CI/CD Linode
ssh CI_DEPLOY_USER@YOUR_CI_CD_IP 'echo "SSH configuration test successful"'
# Test Production Linode
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "SSH configuration test successful"'
What these changes do:
- PermitRootLogin no: Completely disables root SSH access
- PasswordAuthentication no: Disables password-based authentication
- AddressFamily inet: Listens only on IPv4 (optional, for additional security)
Security Benefits:
- No root access: Eliminates the most common attack vector
- Key-only authentication: Prevents brute force password attacks
- Protocol restriction: Limits SSH to IPv4 only (if configured)
Emergency Access:
If you lose SSH access, you can still access the server through:
- Linode Console: Use the Linode dashboard's console access
- Emergency mode: Boot into single-user mode if needed
Verification Commands:
# Check SSH configuration
sudo grep -E "(PermitRootLogin|PasswordAuthentication|AddressFamily)" /etc/ssh/sshd_config
# Check SSH service status
sudo systemctl status ssh
# Check SSH logs for any issues
sudo journalctl -u ssh -f
# Test SSH access from a new session
ssh CI_DEPLOY_USER@YOUR_CI_CD_IP 'whoami'
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'whoami'
Expected Output:
- PermitRootLogin no
- PasswordAuthentication no
- AddressFamily inet (if configured)
- SSH service should be "active (running)"
- Test commands should return the deployment user names
Important Security Notes:
- Test before closing: Always test SSH access from a new session before closing your current SSH connection
- Keep backup: You can restore the original configuration if needed
- Monitor logs: Check /var/log/auth.log for SSH activity and potential attacks
- Regular updates: Keep SSH and system packages updated for security patches
Alternative: Manual Configuration with Backup
If you prefer to manually edit the file with a backup:
# Create backup
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.backup
# Edit the file
sudo nano /etc/ssh/sshd_config
# Test configuration
sudo sshd -t
# Restart service
sudo systemctl restart ssh
Part 1: CI/CD Linode Setup
Step 1: Initial System Setup
1.1 Update the System
sudo apt update && sudo apt upgrade -y
What this does: Updates package lists and upgrades all installed packages to their latest versions.
Expected output: A list of packages being updated, followed by completion messages.
1.2 Configure Timezone
# Configure timezone interactively
sudo dpkg-reconfigure tzdata
# Verify timezone setting
date
What this does: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).
Expected output: After selecting your timezone, the date command should show the current date and time in your selected timezone.
1.3 Configure /etc/hosts
# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts
# Verify the configuration
cat /etc/hosts
What this does:
- Adds localhost entries for both IPv4 and IPv6 addresses to /etc/hosts
- Ensures proper localhost resolution for both IPv4 and IPv6
Important: Replace YOUR_CI_CD_IPV4_ADDRESS and YOUR_CI_CD_IPV6_ADDRESS with the actual IPv4 and IPv6 addresses of your CI/CD Linode obtained from your Linode dashboard.
Expected output: The /etc/hosts file should show entries for 127.0.0.1, ::1, and your Linode's actual IP addresses all mapping to localhost.
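For reference, the lines appended by the commands above would look like the following (203.0.113.10 and 2001:db8::10 are documentation placeholders, not real values):
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
203.0.113.10 localhost
2001:db8::10 localhost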
1.4 Install Essential Packages
sudo apt install -y \
curl \
wget \
git \
jq \
build-essential \
pkg-config \
libssl-dev \
ca-certificates \
apt-transport-https \
software-properties-common \
apache2-utils
What this does: Installs development tools, SSL libraries, and utilities needed for Docker and application building.
Step 2: Create Users
2.1 Create CI Service Account
# Create dedicated group for the CI service account
sudo groupadd -r CI_SERVICE_USER
# Create CI service account user with dedicated group
sudo useradd -r -g CI_SERVICE_USER -s /bin/bash -m -d /home/CI_SERVICE_USER CI_SERVICE_USER
echo "CI_SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd
2.2 Verify Users
sudo su - CI_SERVICE_USER
whoami
pwd
exit
sudo su - CI_DEPLOY_USER
whoami
pwd
exit
Step 3: Clone Repository for Registry Configuration
3.1 Clone Repository
# Switch to CI_DEPLOY_USER (who has sudo access)
sudo su - CI_DEPLOY_USER
# Create application directory and clone repository
sudo mkdir -p /opt/APP_NAME
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/APP_NAME
cd /opt
sudo git clone https://your-forgejo-instance/your-username/APP_NAME.git
sudo chown -R CI_SERVICE_USER:CI_SERVICE_USER APP_NAME/
# Verify the registry folder exists
ls -la /opt/APP_NAME/registry/
Important: Replace your-forgejo-instance, your-username, and APP_NAME with your actual Forgejo instance URL, username, and application name.
What this does:
- CI_DEPLOY_USER creates the directory structure and clones the repository
- CI_SERVICE_USER owns all the files for security
- Registry configuration files are now available at /opt/APP_NAME/registry/
Step 4: Install Docker
4.1 Add Docker Repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
4.2 Install Docker Packages
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
4.3 Configure Docker for CI Service Account
sudo usermod -aG docker CI_SERVICE_USER
Step 5: Set Up Docker Registry with Caddy
We'll set up a basic Docker Registry with Caddy as a reverse proxy, configured to allow unauthenticated pulls but require authentication for pushes.
5.1 Configure Registry Directory for CI_SERVICE_USER
# Create registry directory structure
sudo mkdir -p /opt/registry
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/registry
sudo chmod 755 /opt/registry
5.2 Create Docker Compose Setup
# Create registry directory structure (if not already created)
sudo mkdir -p /opt/registry
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/registry
cd /opt/registry
# Copy registry configuration from repository
# The registry folder contains the Docker Compose and Caddy configuration files
sudo cp /opt/APP_NAME/registry/docker-compose.registry.yml docker-compose.yml
sudo cp /opt/APP_NAME/registry/Caddyfile Caddyfile
# Update Caddyfile with your actual IP address
sudo sed -i "s/YOUR_CI_CD_IP/YOUR_ACTUAL_IP_ADDRESS/g" Caddyfile
# Create environment file for registry authentication
# First, create a secure password hash
REGISTRY_PASSWORD="your-secure-registry-password"
REGISTRY_PASSWORD_HASH=$(htpasswd -nbB registry-user "$REGISTRY_PASSWORD" | cut -d: -f2)
sudo tee .env << EOF
REGISTRY_USERNAME=registry-user
REGISTRY_PASSWORD_HASH=$REGISTRY_PASSWORD_HASH
EOF
# Set proper permissions
sudo chown CI_SERVICE_USER:CI_SERVICE_USER .env
sudo chmod 600 .env
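The Caddyfile itself comes from the repository, so you do not need to write one. As a rough sketch of the idea only (written to /tmp so nothing in /opt/registry is touched; the registry service name, port 5000, and exact directive/hash handling are assumptions that vary by Caddy version), the configuration lets anonymous pulls through while requiring the registry-user credentials for write methods:
cat > /tmp/Caddyfile.example << 'EOF'
YOUR_CI_CD_IP {
    # TLS using the certificate generated later in Step 5.4 (paths as mounted into the Caddy container)
    tls /certs/registry.crt /certs/registry.key
    # Require authentication only for write methods (pushes)
    @write method PUT POST PATCH DELETE
    basicauth @write {
        registry-user {$REGISTRY_PASSWORD_HASH}
    }
    # Everything else (pulls) is proxied straight to the registry container
    reverse_proxy registry:5000
}
EOF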
5.3 Configure Docker Registry
# Create registry data directory
sudo mkdir -p /opt/registry/data
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/registry/data
# Copy registry configuration from repository
sudo cp /opt/APP_NAME/registry/config.yml /opt/registry/config.yml
# Update the baseurl with your actual IP address
sudo sed -i "s/YOUR_CI_CD_IP/YOUR_ACTUAL_IP_ADDRESS/g" /opt/registry/config.yml
# Set proper permissions
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/registry/config.yml
5.4 Generate TLS Certificate and Install in Docker Trust Store
Choose one of the following options based on whether you have a domain name:
Option A: Self-Signed Certificate (No Domain Required)
Perform all of these steps if you do NOT have a domain name:
# 1. Generate self-signed certificate
sudo mkdir -p /opt/registry/certs
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/registry/certs
cd /opt/registry/certs
sudo -u CI_SERVICE_USER openssl genrsa -out registry.key 4096
sudo -u CI_SERVICE_USER openssl req -new -key registry.key \
-out registry.csr \
-subj "/C=US/ST=State/L=City/O=Organization/OU=IT/CN=YOUR_ACTUAL_IP_ADDRESS"
sudo -u CI_SERVICE_USER tee registry.conf > /dev/null << EOF
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[req_distinguished_name]
C = US
ST = State
L = City
O = Organization
OU = IT
CN = YOUR_ACTUAL_IP_ADDRESS
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = YOUR_ACTUAL_IP_ADDRESS
DNS.2 = localhost
IP.1 = YOUR_ACTUAL_IP_ADDRESS
IP.2 = 127.0.0.1
EOF
sudo -u CI_SERVICE_USER openssl x509 -req -in registry.csr \
-signkey registry.key \
-out registry.crt \
-days 365 \
-extensions v3_req \
-extfile registry.conf
sudo chmod 600 registry.key
sudo chmod 644 registry.crt
sudo -u CI_SERVICE_USER openssl x509 -in registry.crt -text -noout
# 2. Install certificate into Docker trust store
sudo mkdir -p /etc/docker/certs.d/registry
sudo cp /opt/registry/certs/registry.crt /etc/docker/certs.d/registry/ca.crt
sudo cp /opt/registry/certs/registry.crt /usr/local/share/ca-certificates/registry-ca.crt
sudo update-ca-certificates
sudo systemctl restart docker
Option B: Let's Encrypt Certificate (Domain Required)
Perform all of these steps if you DO have a domain name:
# 1. Generate Let's Encrypt certificate
sudo apt update
sudo apt install -y certbot python3-certbot-nginx
sudo certbot certonly --standalone \
--email your-email@example.com \
--agree-tos \
--no-eff-email \
-d YOUR_DOMAIN_NAME
sudo certbot certificates
sudo mkdir -p /opt/registry/certs
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/registry/certs
sudo cp /etc/letsencrypt/live/YOUR_DOMAIN_NAME/fullchain.pem /opt/registry/certs/registry.crt
sudo cp /etc/letsencrypt/live/YOUR_DOMAIN_NAME/privkey.pem /opt/registry/certs/registry.key
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/registry/certs/registry.crt
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/registry/certs/registry.key
sudo chmod 644 /opt/registry/certs/registry.crt
sudo chmod 600 /opt/registry/certs/registry.key
# 2. Install certificate into Docker trust store
sudo mkdir -p /etc/docker/certs.d/YOUR_DOMAIN_NAME
sudo cp /opt/registry/certs/registry.crt /etc/docker/certs.d/YOUR_DOMAIN_NAME/ca.crt
sudo systemctl restart docker
Note:
- For Option A: Replace YOUR_ACTUAL_IP_ADDRESS with your server's IP address.
- For Option B: Replace YOUR_DOMAIN_NAME with your domain name and your-email@example.com with your email address.
After completing the steps for your chosen option, continue with Step 5.7 (Start Docker Registry with Docker Compose).
5.5 Install Certificate into Docker Trust Store (Option B Only)
Important: This step adds the Let's Encrypt certificate to Docker's trust store. Since Let's Encrypt is a trusted CA, Docker will automatically trust this certificate.
# Create Docker certificates directory for your domain
sudo mkdir -p /etc/docker/certs.d/YOUR_DOMAIN_NAME
# Copy certificate to Docker trust store
sudo cp /opt/registry/certs/registry.crt /etc/docker/certs.d/YOUR_DOMAIN_NAME/ca.crt
# Restart Docker daemon to pick up the new certificate
sudo systemctl restart docker
# Verify certificate installation
if [ -f "/etc/docker/certs.d/YOUR_DOMAIN_NAME/ca.crt" ]; then
echo "✅ Let's Encrypt certificate installed in Docker trust store"
else
echo "❌ Failed to install certificate in Docker trust store"
exit 1
fi
echo "Certificate installation completed successfully!"
echo "Docker can now connect to the registry securely using your domain name"
5.6 Set Up Automatic Certificate Renewal (Option B Only)
Important: Let's Encrypt certificates expire after 90 days, so we need to set up automatic renewal.
# Test automatic renewal
sudo certbot renew --dry-run
# Set up automatic renewal cron job
sudo crontab -e
# Add this line to renew certificates twice daily (Let's Encrypt allows renewal 30 days before expiry):
# 0 12,18 * * * /usr/bin/certbot renew --quiet --post-hook "cp /etc/letsencrypt/live/YOUR_DOMAIN_NAME/fullchain.pem /opt/registry/certs/registry.crt && cp /etc/letsencrypt/live/YOUR_DOMAIN_NAME/privkey.pem /opt/registry/certs/registry.key && chown CI_SERVICE_USER:CI_SERVICE_USER /opt/registry/certs/registry.* && chmod 644 /opt/registry/certs/registry.crt && chmod 600 /opt/registry/certs/registry.key && systemctl restart docker-registry.service"
echo "Automatic certificate renewal configured!"
echo "Certificates will be renewed automatically and the registry service will be restarted"
5.7 Start Docker Registry with Docker Compose
# As CI_DEPLOY_USER (who has sudo), refresh the Docker Compose and Caddy configuration with certificate support
cd /opt/registry
sudo cp /opt/APP_NAME/registry/docker-compose.registry.yml docker-compose.yml
sudo cp /opt/APP_NAME/registry/Caddyfile Caddyfile
# Update Caddyfile with your actual IP address
sudo sed -i "s/YOUR_CI_CD_IP/YOUR_ACTUAL_IP_ADDRESS/g" Caddyfile
sudo chown CI_SERVICE_USER:CI_SERVICE_USER docker-compose.yml Caddyfile
# Switch to CI_SERVICE_USER (who has Docker group access) to start the services
sudo su - CI_SERVICE_USER
cd /opt/registry
# Start the Docker Registry and Caddy services
docker compose up -d
# Verify services are running
docker compose ps
# Exit CI_SERVICE_USER shell
exit
5.8 Create Systemd Service for Docker Compose
# Create systemd service file for Docker Registry with Docker Compose
sudo tee /etc/systemd/system/docker-registry.service << EOF
[Unit]
Description=Docker Registry with Caddy
After=docker.service
Requires=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
User=CI_SERVICE_USER
Group=CI_SERVICE_USER
WorkingDirectory=/opt/registry
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
ExecReload=/usr/bin/docker compose down && /usr/bin/docker compose up -d
[Install]
WantedBy=multi-user.target
EOF
# Enable and start Docker Registry service
sudo systemctl daemon-reload
sudo systemctl enable docker-registry.service
sudo systemctl start docker-registry.service
# Monitor startup
sudo journalctl -u docker-registry.service -f
5.9 Test Registry Setup
# Switch to CI_SERVICE_USER for testing (CI_SERVICE_USER runs CI pipeline and Docker operations)
sudo su - CI_SERVICE_USER
# Test Docker login and push (Option B shown; if you used Option A, substitute YOUR_CI_CD_IP for YOUR_DOMAIN_NAME throughout)
echo "your-secure-registry-password" | docker login YOUR_DOMAIN_NAME -u registry-user --password-stdin
# Create and push test image
echo "FROM alpine:latest" > /tmp/test.Dockerfile
docker build -f /tmp/test.Dockerfile -t YOUR_DOMAIN_NAME/APP_NAME/test:latest /tmp
docker push YOUR_DOMAIN_NAME/APP_NAME/test:latest
# Test public pull (no authentication)
docker logout YOUR_DOMAIN_NAME
docker pull YOUR_DOMAIN_NAME/APP_NAME/test:latest
# Test that unauthorized push is blocked
echo "FROM alpine:latest" > /tmp/unauthorized.Dockerfile
docker build -f /tmp/unauthorized.Dockerfile -t YOUR_DOMAIN_NAME/APP_NAME/unauthorized:latest /tmp
docker push YOUR_DOMAIN_NAME/APP_NAME/unauthorized:latest
# Expected: This should fail with authentication error
# Clean up
docker rmi YOUR_DOMAIN_NAME/APP_NAME/test:latest
docker rmi YOUR_DOMAIN_NAME/APP_NAME/unauthorized:latest
exit
Expected behavior:
- ✅ Push requires authentication with registry-user credentials
- ✅ Pull works without authentication (public read access)
- ✅ Unauthorized push is blocked
- ✅ Registry accessible at https://YOUR_DOMAIN_NAME with a valid Let's Encrypt certificate
- ✅ No insecure registry configuration needed
- ✅ Certificate renews automatically via the cron job (certbot renews once roughly 30 days remain before expiry)
Step 6: Install Forgejo Actions Runner
6.1 Download Runner
Important: Run this step as the CI_DEPLOY_USER (not root or CI_SERVICE_USER). The CI_DEPLOY_USER handles deployment tasks including downloading and installing the Forgejo runner.
cd ~
# Get the latest version dynamically
LATEST_VERSION=$(curl -s https://code.forgejo.org/api/v1/repos/forgejo/runner/releases | jq -r '.[0].tag_name')
echo "Downloading Forgejo runner version: $LATEST_VERSION"
# Download the latest runner
wget https://code.forgejo.org/forgejo/runner/releases/download/${LATEST_VERSION}/forgejo-runner-${LATEST_VERSION#v}-linux-amd64
chmod +x forgejo-runner-${LATEST_VERSION#v}-linux-amd64
sudo mv forgejo-runner-${LATEST_VERSION#v}-linux-amd64 /usr/bin/forgejo-runner
Alternative: Pin to Specific Version (Recommended for Production)
If you prefer to pin to a specific version for stability, replace the dynamic download with:
cd ~
VERSION="v6.3.1" # Pin to specific version
wget https://code.forgejo.org/forgejo/runner/releases/download/${VERSION}/forgejo-runner-${VERSION#v}-linux-amd64
chmod +x forgejo-runner-${VERSION#v}-linux-amd64
sudo mv forgejo-runner-${VERSION#v}-linux-amd64 /usr/bin/forgejo-runner
What this does:
- Dynamic approach: Downloads the latest stable Forgejo Actions runner
- Version pinning: Allows you to specify a known-good version for production
- System installation: Installs the binary system-wide in /usr/bin/
- Makes the binary executable and available system-wide
Production Recommendation: Use version pinning in production environments to ensure consistency and avoid unexpected breaking changes.
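Either way, a quick sanity check confirms the binary is installed and on the PATH (the --version flag is assumed to be available in current releases; forgejo-runner --help lists the available commands):
# Should print the installed runner version
forgejo-runner --version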
6.2 Register Runner
Important: The runner must be registered with your Forgejo instance before it can start. This creates the required .runner configuration file.
Step 1: Get Permissions to Create Repository-level Runners
To create a repository-level runner, you need Repository Admin or Owner permissions. Here's how to check and manage permissions:
Check Your Current Permissions:
- Go to your repository: https://your-forgejo-instance/your-username/your-repo
- Look for the Settings tab in the repository navigation
- If you see Actions in the left sidebar under Settings, you have the right permissions
- If you don't see Settings or Actions, you don't have admin access
Add Repository Admin (Repository Owner Only):
If you're the repository owner and need to give someone else admin access:
- Go to Repository Settings:
  - Navigate to your repository
  - Click Settings tab
  - Click Collaborators in the left sidebar
- Add Collaborator:
  - Click Add Collaborator button
  - Enter the username or email of the person you want to add
  - Select Admin from the role dropdown
  - Click Add Collaborator
- Alternative: Manage Team Access (for Organizations):
  - Go to Settings → Collaborators
  - Click Manage Team Access
  - Add the team with Admin permissions
Repository Roles and Permissions:
Role | Can Create Runners | Can Manage Repository | Can Push Code |
---|---|---|---|
Owner | ✅ Yes | ✅ Yes | ✅ Yes |
Admin | ✅ Yes | ✅ Yes | ✅ Yes |
Write | ❌ No | ❌ No | ✅ Yes |
Read | ❌ No | ❌ No | ❌ No |
If You Don't Have Permissions:
Option 1: Ask Repository Owner
- Contact the person who owns the repository
- Ask them to create the runner and share the registration token with you
Option 2: Use Organization/User Runner
- If you have access to organization settings, create an org-level runner
- Or create a user-level runner if you own other repositories
Option 3: Site Admin Help
- Contact your Forgejo instance administrator to create a site-level runner
Site Administrator: Setting Repository Admin (Forgejo Instance Admin)
To add an existing user as an Administrator of an existing repository in Forgejo, follow these steps:
- Go to the repository: Navigate to the main page of the repository you want to manage.
- Access repository settings: Click on the "Settings" tab under your repository name.
- Go to Collaborators & teams: In the sidebar, under the "Access" section, click on "Collaborators & teams".
- Manage access: Under "Manage access", locate the existing user you want to make an administrator.
- Change their role: Next to the user's name, select the "Role" dropdown menu and click on "Administrator".
Important Note: If the user is already the Owner of the repository, then they do not have to add themselves as an Administrator of the repository and indeed cannot. Repository owners automatically have all administrative permissions.
Important Notes for Site Administrators:
- Repository Admin can manage the repository but cannot modify site-wide settings
- Site Admin retains full control over the Forgejo instance
- Changes take effect immediately for the user
- Consider the security implications of granting admin access
Step 2: Get Registration Token
- Go to your Forgejo repository
- Navigate to Settings → Actions → Runners
- Click "New runner"
- Copy the registration token
Step 3: Register the Runner
# Switch to CI_DEPLOY_USER to register the runner
sudo su - CI_DEPLOY_USER
cd ~
# Register the runner with your Forgejo instance
forgejo-runner register \
--instance https://your-forgejo-instance \
--token YOUR_REGISTRATION_TOKEN \
--name "ci-runner" \
--labels "ci" \
--no-interactive
Important: Replace your-forgejo-instance with your actual Forgejo instance URL (make sure it ends in a /) and YOUR_REGISTRATION_TOKEN with the token you copied from Step 2.
Note: your-forgejo-instance should be the base URL of your Forgejo instance (e.g., https://git.<your-domain>/), not the full path to the repository. The runner registration process will handle connecting to the specific repository based on the token you provide.
What this does:
- Creates the required .runner configuration file in the CI_DEPLOY_USER's home directory
- Registers the runner with your Forgejo instance
- Sets up the runner with the "ci" label used by the CI workflow
Step 4: Set Up System Configuration
# Create system config directory for Forgejo runner
sudo mkdir -p /etc/forgejo-runner
# Copy the runner configuration to system location
sudo mv /home/CI_DEPLOY_USER/.runner /etc/forgejo-runner/.runner
# Set proper ownership and permissions
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /etc/forgejo-runner/.runner
sudo chmod 600 /etc/forgejo-runner/.runner
What this does:
- Moves the configuration to the system location (/etc/forgejo-runner/.runner)
- Sets proper ownership and permissions so CI_SERVICE_USER can access the config
Step 5: Create and Enable Systemd Service
sudo tee /etc/systemd/system/forgejo-runner.service > /dev/null << 'EOF'
[Unit]
Description=Forgejo Actions Runner
After=network.target
[Service]
Type=simple
User=CI_SERVICE_USER
WorkingDirectory=/etc/forgejo-runner
ExecStart=/usr/bin/forgejo-runner daemon
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
# Enable the service
sudo systemctl daemon-reload
sudo systemctl enable forgejo-runner.service
What this does:
- Creates the systemd service configuration for the Forgejo runner
- Sets the working directory to /etc/forgejo-runner, where the .runner configuration file is located
- The runner will start here, but the CI workflow will deploy the application to /opt/APP_NAME
- Enables the service to start automatically on boot
- Sets up proper restart behavior for reliability
6.3 Start Service
# Start the Forgejo runner service
sudo systemctl start forgejo-runner.service
# Verify the service is running
sudo systemctl status forgejo-runner.service
Expected Output: The service should show "active (running)" status.
6.4 Test Runner Configuration
# Check if the runner is running
sudo systemctl status forgejo-runner.service
# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager
# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "ci-runner" with status "Online"
Expected Output:
- systemctl status should show "active (running)"
- Forgejo web interface should show the runner as online with the "ci" label
If something goes wrong:
- Check logs: sudo journalctl -u forgejo-runner.service -f
- Verify token: Make sure the registration token is correct
- Check network: Ensure the runner can reach your Forgejo instance
- Restart service: sudo systemctl restart forgejo-runner.service
Step 7: Set Up Docker-in-Docker (DinD) for CI Operations
Important: This step sets up a Docker-in-Docker container that provides an isolated environment for CI/CD operations, eliminating resource contention with the Docker Registry and simplifying cleanup.
7.1 Create Containerized CI/CD Environment
# Switch to CI_SERVICE_USER (who has Docker group access)
sudo su - CI_SERVICE_USER
# Navigate to the application directory
cd /opt/APP_NAME
# Start DinD container for isolated Docker operations
docker run -d \
--name ci-dind \
--privileged \
-p 2375:2375 \
-e DOCKER_TLS_CERTDIR="" \
docker:dind
# Wait a minute or two for the Docker daemon inside DinD to become ready (or use the readiness-poll sketch after this subsection)
# Test DinD connectivity
docker exec ci-dind docker version
What this does:
- Creates isolated DinD environment: Provides isolated Docker environment for all CI/CD operations
- Health checks: Ensures DinD is fully ready before proceeding
- Simple setup: Direct Docker commands for maximum flexibility
Why CI_SERVICE_USER: The CI_SERVICE_USER is in the docker group and runs the CI pipeline, so it needs direct access to the DinD container for seamless CI/CD operations.
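Rather than sleeping for a fixed time, you can poll the daemon inside DinD until it answers. This is a small convenience sketch built only from the commands already shown above:
# Poll until the Docker daemon inside the DinD container responds
until docker exec ci-dind docker info > /dev/null 2>&1; do
  echo "Waiting for DinD to become ready..."
  sleep 5
done
echo "DinD is ready"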
7.2 Configure DinD for Docker Registry
# Navigate to the application directory
cd /opt/APP_NAME
# Login to Docker Registry from within DinD
echo "your-registry-password" | docker exec -i ci-dind docker login YOUR_CI_CD_IP -u registry-user --password-stdin
# Test Docker Registry connectivity from DinD
docker exec ci-dind docker pull alpine:latest
docker exec ci-dind docker tag alpine:latest YOUR_CI_CD_IP/APP_NAME/test:latest
docker exec ci-dind docker push YOUR_CI_CD_IP/APP_NAME/test:latest
# Clean up test image
docker exec ci-dind docker rmi YOUR_CI_CD_IP/APP_NAME/test:latest
7.3 Set Up Workspace Directory
Important: The CI workflow needs a workspace directory for code checkout. This directory will be used by the Forgejo Actions runner.
# Switch to CI_DEPLOY_USER (who has sudo privileges)
sudo su - CI_DEPLOY_USER
# Create workspace directory in /tmp with proper permissions
sudo mkdir -p /tmp/ci-workspace
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /tmp/ci-workspace
sudo chmod 755 /tmp/ci-workspace
# Verify the setup
ls -la /tmp/ci-workspace
What this does:
- Creates workspace: Provides a dedicated directory for CI operations
- Proper ownership: CI_SERVICE_USER owns the directory for write access
- Appropriate permissions: 755 allows read/write for owner, read for others
- Temporary location: Uses /tmp for easy cleanup and no persistence needed
Alternative locations (if you prefer):
- /opt/ci-workspace - More permanent location
- /home/CI_SERVICE_USER/workspace - User's home directory
- /var/lib/ci-workspace - System-managed location
Note: The CI workflow will use this directory for code checkout and then copy the contents to the DinD container.
What the DinD registry configuration (Step 7.2) does:
- Configures registry access: Logs the Docker daemon inside DinD into the Docker Registry
- Tests connectivity: Verifies DinD can pull, tag, and push images to the Docker Registry
- Validates setup: Ensures the complete CI/CD pipeline will work
7.4 CI/CD Workflow Architecture
The CI/CD pipeline uses a three-stage approach with dedicated environments for each stage:
Job 1 (Testing) - docker-compose.test.yml:
- Purpose: Comprehensive testing with multiple containers
- Environment: DinD with PostgreSQL, Rust, and Node.js containers
- Code Checkout: Code is checked out directly into the DinD container at /workspace from the Forgejo repository that triggered the build
- Services:
  - PostgreSQL database for backend tests
  - Rust toolchain for backend testing and migrations
  - Node.js toolchain for frontend testing
- Network: All containers communicate through ci-cd-test-network
- Setup: DinD container created, Docker Registry login performed, code cloned into DinD from Forgejo
- Cleanup: Testing containers removed, DinD container kept running
Job 2 (Building) - Direct Docker Commands:
- Purpose: Image building and pushing to Docker Registry
- Environment: Same DinD container from Job 1
- Code Access: Reuses code from Job 1, updates to latest commit
- Process:
  - Uses Docker Buildx for efficient building
  - Builds backend and frontend images separately
  - Pushes images to Docker Registry
- Registry Access: Reuses Docker Registry authentication from Job 1
- Cleanup: DinD container stopped and removed (clean slate for next run)
Job 3 (Deployment) - docker-compose.prod.yml:
- Purpose: Production deployment with pre-built images
- Environment: Production runner on Production Linode
- Process:
  - Pulls images from Docker Registry
  - Deploys complete application stack
  - Verifies all services are healthy
- Services: PostgreSQL, backend, frontend, Nginx
Key Benefits:
- 🧹 Complete Isolation: Each job has its own dedicated environment
- 🚫 No Resource Contention: Testing and building don't interfere with the Docker Registry
- ⚡ Consistent Environment: Same setup every time
- 🎯 Purpose-Specific: Each Docker Compose file serves a specific purpose
- 🔄 Parallel Safety: Jobs can run safely in parallel
Testing DinD Setup:
# Test DinD functionality
docker exec ci-dind docker run --rm alpine:latest echo "DinD is working!"
# Test Docker Registry integration
docker exec ci-dind docker pull alpine:latest
docker exec ci-dind docker tag alpine:latest YOUR_CI_CD_IP/APP_NAME/dind-test:latest
docker exec ci-dind docker push YOUR_CI_CD_IP/APP_NAME/dind-test:latest
# Clean up test
docker exec ci-dind docker rmi YOUR_CI_CD_IP/APP_NAME/dind-test:latest
Expected Output:
- DinD container should be running and accessible
- Docker commands should work inside DinD
- Docker Registry push/pull should work from DinD
7.5 Production Deployment Architecture
The production deployment uses a separate Docker Compose file (docker-compose.prod.yml) that pulls built images from the Docker Registry and deploys the complete application stack.
Production Stack Components:
- PostgreSQL: Production database with persistent storage
- Backend: Rust application built and pushed from CI/CD
- Frontend: Next.js application built and pushed from CI/CD
- Nginx: Reverse proxy with SSL termination
Deployment Flow:
- Production Runner: Runs on Production Linode with the prod label
- Image Pull: Pulls latest images from Docker Registry on CI Linode
- Stack Deployment: Uses docker-compose.prod.yml to deploy the complete stack
- Health Verification: Ensures all services are healthy before completion
Key Benefits:
- 🔄 Image Registry: Centralized image storage in Docker Registry
- 📦 Consistent Deployment: Same images tested in CI are deployed to production
- ⚡ Fast Deployment: Only pulls changed images
- 🛡️ Rollback Capability: Can easily rollback to previous image versions
- 📊 Health Monitoring: Built-in health checks for all services
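In practice, the deployment job boils down to a handful of Docker Compose commands on the Production Linode. The exact steps live in .forgejo/workflows/ci.yml; the following is only a simplified sketch of what the runner executes:
# Simplified sketch of the deploy job, run in /opt/APP_NAME on the Production Linode
cd /opt/APP_NAME
# Pull the images that CI pushed to the Docker Registry
docker compose -f docker-compose.prod.yml pull
# Recreate the stack with the new images
docker compose -f docker-compose.prod.yml up -d
# Confirm every service reports healthy
docker compose -f docker-compose.prod.yml ps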
7.6 Monitoring Script
Important: The repository includes a pre-configured monitoring script in the scripts/ directory that can be used for both CI/CD and production monitoring.
Repository Script:
- scripts/monitor.sh - Comprehensive monitoring script with support for both CI/CD and production environments
To use the repository monitoring script:
# The repository is already cloned at /opt/APP_NAME/
cd /opt/APP_NAME
# Make the script executable
chmod +x scripts/monitor.sh
# Test CI/CD monitoring
./scripts/monitor.sh --type ci-cd
# Test production monitoring (if you have a production setup)
./scripts/monitor.sh --type production
Note: The repository script is more comprehensive and includes proper error handling, colored output, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate monitoring information.
Step 8: Configure Firewall
8.1 Configure UFW Firewall
sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 443/tcp # Docker Registry via Caddy (public read access)
Security Model:
- Port 443 (Docker Registry): Public read access, authenticated write access
- SSH: Restricted to your IP addresses
- All other ports: Blocked
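The rules above accept SSH from any address. If you want the firewall to match the "SSH restricted to your IP addresses" model described here, you can optionally replace the blanket rule with a source-restricted one (YOUR_HOME_IP is a placeholder for your own static address):
# Optional tightening: allow SSH only from your own IP, then remove the open rule
sudo ufw allow from YOUR_HOME_IP to any port 22 proto tcp
sudo ufw delete allow ssh
# Review the resulting rule set
sudo ufw status numbered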
Step 9: Test CI/CD Setup
9.1 Test Docker Installation
docker --version
docker compose version
9.2 Check Docker Registry Status
cd /opt/registry
docker compose ps
9.3 Test Docker Registry Access
# Test the registry API through Caddy (use YOUR_DOMAIN_NAME instead if you chose Option B)
curl -k https://YOUR_CI_CD_IP/v2/
# Expected: an empty JSON response ({}) with HTTP 200
# Test that Caddy is answering on HTTPS
curl -k -I https://YOUR_CI_CD_IP
Part 2: Production Linode Setup
Step 10: Initial System Setup
10.1 Update the System
sudo apt update && sudo apt upgrade -y
10.2 Configure Timezone
# Configure timezone interactively
sudo dpkg-reconfigure tzdata
# Verify timezone setting
date
What this does: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).
Expected output: After selecting your timezone, the date command should show the current date and time in your selected timezone.
10.3 Configure /etc/hosts
# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts
# Verify the configuration
cat /etc/hosts
What this does:
- Adds localhost entries for both IPv4 and IPv6 addresses to /etc/hosts
- Ensures proper localhost resolution for both IPv4 and IPv6
Important: Replace YOUR_PRODUCTION_IPV4_ADDRESS and YOUR_PRODUCTION_IPV6_ADDRESS with the actual IPv4 and IPv6 addresses of your Production Linode obtained from your Linode dashboard.
Expected output: The /etc/hosts file should show entries for 127.0.0.1, ::1, and your Linode's actual IP addresses all mapping to localhost.
10.4 Install Essential Packages
sudo apt install -y \
curl \
wget \
git \
jq \
ca-certificates \
apt-transport-https \
software-properties-common \
ufw \
fail2ban \
htop \
nginx \
certbot \
python3-certbot-nginx
10.5 Secure SSH Configuration
Critical Security Step: After setting up SSH key authentication, you must disable password authentication and root login to secure your Production server.
Step 1: Edit SSH Configuration File
# Open the SSH configuration file using nano
sudo nano /etc/ssh/sshd_config
Step 2: Disallow Root Logins
Find the line that says:
#PermitRootLogin prohibit-password
Change it to:
PermitRootLogin no
Step 3: Disable Password Authentication
Find the line that says:
#PasswordAuthentication yes
Change it to:
PasswordAuthentication no
Step 4: Configure Protocol Family (Optional)
If you only need IPv4 connections, find or add:
#AddressFamily any
Change it to:
AddressFamily inet
Step 5: Save and Exit
- Press Ctrl + X to exit
- Press Y to confirm saving
- Press Enter to confirm the filename
Step 6: Test SSH Configuration
# Test the SSH configuration for syntax errors
sudo sshd -t
Step 7: Restart SSH Service
For Ubuntu 22.10+ (socket-based activation):
sudo systemctl enable --now ssh.service
For other distributions:
sudo systemctl restart sshd
Step 8: Verify SSH Access
IMPORTANT: Test SSH access from a new terminal window before closing your current session:
# Test Production Linode
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "SSH configuration test successful"'
What these changes do:
- PermitRootLogin no: Completely disables root SSH access
- PasswordAuthentication no: Disables password-based authentication
- AddressFamily inet: Listens only on IPv4 (optional, for additional security)
Security Benefits:
- No root access: Eliminates the most common attack vector
- Key-only authentication: Prevents brute force password attacks
- Protocol restriction: Limits SSH to IPv4 only (if configured)
Emergency Access:
If you lose SSH access, you can still access the server through:
- Linode Console: Use the Linode dashboard's console access
- Emergency mode: Boot into single-user mode if needed
Verification Commands:
# Check SSH configuration
sudo grep -E "(PermitRootLogin|PasswordAuthentication|AddressFamily)" /etc/ssh/sshd_config
# Check SSH service status
sudo systemctl status ssh
# Check SSH logs for any issues
sudo journalctl -u ssh -f
# Test SSH access from a new session
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'whoami'
Expected Output:
- PermitRootLogin no
- PasswordAuthentication no
- AddressFamily inet (if configured)
- SSH service should be "active (running)"
- Test commands should return the deployment user name
Important Security Notes:
- Test before closing: Always test SSH access from a new session before closing your current SSH connection
- Keep backup: You can restore the original configuration if needed
- Monitor logs: Check /var/log/auth.log for SSH activity and potential attacks
- Regular updates: Keep SSH and system packages updated for security patches
Alternative: Manual Configuration with Backup
If you prefer to manually edit the file with a backup:
# Create backup
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.backup
# Edit the file
sudo nano /etc/ssh/sshd_config
# Test configuration
sudo sshd -t
# Restart service
sudo systemctl restart ssh
Step 11: Create Users
11.1 Create the PROD_SERVICE_USER User
# Create dedicated group for the production service account
sudo groupadd -r PROD_SERVICE_USER
# Create production service account user with dedicated group
sudo useradd -r -g PROD_SERVICE_USER -s /bin/bash -m -d /home/PROD_SERVICE_USER PROD_SERVICE_USER
echo "PROD_SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd
11.2 Verify Users
sudo su - PROD_SERVICE_USER
whoami
pwd
exit
sudo su - PROD_DEPLOY_USER
whoami
pwd
exit
Step 12: Install Docker
12.1 Add Docker Repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
12.2 Install Docker Packages
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
12.3 Configure Docker for Production Service Account
sudo usermod -aG docker PROD_SERVICE_USER
12.4 Create Application Directory
# Create application directory for deployment
sudo mkdir -p /opt/APP_NAME
sudo chown PROD_SERVICE_USER:PROD_SERVICE_USER /opt/APP_NAME
sudo chmod 755 /opt/APP_NAME
# Verify the directory was created correctly
ls -la /opt/APP_NAME
What this does:
- Creates the application directory that will be used for deployment
- Sets proper ownership for the PROD_SERVICE_USER
- Ensures the directory exists before the CI workflow runs
Step 13: Configure Docker for Docker Registry Access
Important: The Production Linode needs to be able to pull Docker images from the Docker Registry on the CI/CD Linode. With a publicly trusted Let's Encrypt certificate (Option B), no additional certificate configuration is needed; with a self-signed certificate (Option A), see the note below.
# Change to the PROD_SERVICE_USER
sudo su - PROD_SERVICE_USER
# Test that Docker can pull images from the Docker Registry
docker pull YOUR_CI_CD_IP/APP_NAME/test:latest
# If the pull succeeds, the Docker Registry is accessible
# Change back to PROD_DEPLOY_USER
exit
Important: Replace YOUR_CI_CD_IP with your actual CI/CD Linode IP address (or YOUR_DOMAIN_NAME if you used Option B). If you used Option A (self-signed certificate), you also need to install the registry certificate into this host's Docker trust store, as in Step 5.4, before the pull will succeed.
What this does:
- Tests Docker Registry access: Verifies that Docker can successfully pull images from the Docker Registry
- Certificate handling: With a Let's Encrypt certificate (Option B), no extra Docker configuration is needed; with a self-signed certificate (Option A), the trust-store step noted above applies
- Simple setup: No complex certificate management required
Step 14: Set Up Forgejo Runner for Production Deployment
Important: The Production Linode needs a Forgejo runner to execute the deployment job from the CI/CD workflow. This runner will pull images from Docker Registry and deploy using docker-compose.prod.yml.
14.1 Download Runner
Important: Run this step as the PROD_DEPLOY_USER (not root or PROD_SERVICE_USER). The PROD_DEPLOY_USER handles deployment tasks including downloading and installing the Forgejo runner.
cd ~
# Get the latest version dynamically
LATEST_VERSION=$(curl -s https://code.forgejo.org/api/v1/repos/forgejo/runner/releases | jq -r '.[0].tag_name')
echo "Downloading Forgejo runner version: $LATEST_VERSION"
# Download the latest runner
wget https://code.forgejo.org/forgejo/runner/releases/download/${LATEST_VERSION}/forgejo-runner-${LATEST_VERSION#v}-linux-amd64
chmod +x forgejo-runner-${LATEST_VERSION#v}-linux-amd64
sudo mv forgejo-runner-${LATEST_VERSION#v}-linux-amd64 /usr/bin/forgejo-runner
Alternative: Pin to Specific Version (Recommended for Production)
If you prefer to pin to a specific version for stability, replace the dynamic download with:
cd ~
VERSION="v6.3.1" # Pin to specific version
wget https://code.forgejo.org/forgejo/runner/releases/download/${VERSION}/forgejo-runner-${VERSION#v}-linux-amd64
chmod +x forgejo-runner-${VERSION#v}-linux-amd64
sudo mv forgejo-runner-${VERSION#v}-linux-amd64 /usr/bin/forgejo-runner
What this does:
- Dynamic approach: Downloads the latest stable Forgejo Actions runner
- Version pinning: Allows you to specify a known-good version for production
- System installation: Installs the binary system-wide in /usr/bin/
- Makes the binary executable and available system-wide
Production Recommendation: Use version pinning in production environments to ensure consistency and avoid unexpected breaking changes.
14.2 Get Registration Token
- Go to your Forgejo repository
- Navigate to Settings → Actions → Runners
- Click "New runner"
- Copy the registration token
14.3 Register the Production Runner
Step 1: Register the Runner
# Switch to PROD_DEPLOY_USER to register the runner
sudo su - PROD_DEPLOY_USER
cd ~
# Register the runner with your Forgejo instance
forgejo-runner register \
--instance https://your-forgejo-instance \
--token YOUR_REGISTRATION_TOKEN \
--name "prod-runner" \
--labels "prod" \
--no-interactive
Important: Replace your-forgejo-instance with your actual Forgejo instance URL (make sure it ends in a /) and YOUR_REGISTRATION_TOKEN with the token you copied from Step 14.2.
Note: your-forgejo-instance should be the base URL of your Forgejo instance (e.g., https://git.<your-domain>/), not the full path to the repository. The runner registration process will handle connecting to the specific repository based on the token you provide.
What this does:
- Creates the required .runner configuration file in the PROD_DEPLOY_USER's home directory
- Registers the runner with your Forgejo instance
- Sets up the runner with the "prod" label used for production deployment
Step 2: Set Up System Configuration
# Create system config directory for Forgejo runner
sudo mkdir -p /etc/forgejo-runner
# Copy the runner configuration to system location
sudo mv /home/PROD_DEPLOY_USER/.runner /etc/forgejo-runner/.runner
# Set proper ownership and permissions
sudo chown PROD_SERVICE_USER:PROD_SERVICE_USER /etc/forgejo-runner/.runner
sudo chmod 600 /etc/forgejo-runner/.runner
What this does:
- Moves the configuration to the system location (/etc/forgejo-runner/.runner)
- Sets proper ownership and permissions so PROD_SERVICE_USER can access the config
14.4 Create Systemd Service
# Create systemd service file
sudo tee /etc/systemd/system/forgejo-runner.service > /dev/null << 'EOF'
[Unit]
Description=Forgejo Actions Runner (Production)
After=network.target docker.service
[Service]
Type=simple
User=PROD_SERVICE_USER
WorkingDirectory=/etc/forgejo-runner
ExecStart=/usr/bin/forgejo-runner daemon
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
# Enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable forgejo-runner.service
sudo systemctl start forgejo-runner.service
# Verify the service is running
sudo systemctl status forgejo-runner.service
14.5 Test Runner Configuration
# Check if the runner is running
sudo systemctl status forgejo-runner.service
# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager
# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "prod-runner" with status "Online"
Expected Output:
- systemctl status should show "active (running)"
- Forgejo web interface should show the runner as online with the "prod" label
Important: The CI/CD workflow (.forgejo/workflows/ci.yml) is already configured to use this production runner. The deploy job specifies runs-on: [self-hosted, prod], which means it will execute on any runner with the "prod" label.
Architecture:
- Runner Configuration: Located in /etc/forgejo-runner/.runner (system configuration)
- Application Deployment: Located in /opt/APP_NAME/ (application software)
- Workflow Process: Runner starts in /etc/forgejo-runner, then checks out directly to /opt/APP_NAME
When the workflow runs, it will:
- Pull the latest Docker images from Docker Registry
- Use the docker-compose.prod.yml file to deploy the application stack
- Create the necessary environment variables for production deployment
- Verify that all services are healthy after deployment
The production runner will automatically handle the deployment process when you push to the main branch.
14.6 Understanding the Production Docker Compose Setup
The docker-compose.prod.yml file is specifically designed for production deployment and differs from development setups:
Key Features:
- Image-based deployment: Uses pre-built images from Docker Registry instead of building from source
- Production networking: All services communicate through a dedicated sharenet-network
- Health checks: Each service includes health checks to ensure proper startup order
- Nginx reverse proxy: Includes Nginx for SSL termination, load balancing, and security headers
- Persistent storage: PostgreSQL data is stored in a named volume for persistence
- Environment variables: Uses environment variables for configuration (set by the CI/CD workflow)
Service Architecture:
- PostgreSQL: Database with health checks and persistent storage
- Backend: Rust API service that waits for PostgreSQL to be healthy
- Frontend: Next.js application that waits for backend to be healthy
- Nginx: Reverse proxy that serves the frontend and proxies API requests to backend
Deployment Process:
- The production runner pulls the latest images from Docker Registry
- Creates environment variables for the deployment
- Runs docker compose -f docker-compose.prod.yml up -d
- Waits for all services to be healthy
- Verifies the deployment was successful
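If you ever need to reproduce this by hand on the Production Linode (for debugging or after an interrupted workflow run), the equivalent manual commands look roughly like this; it is a sketch under this guide's assumptions (compose file in /opt/APP_NAME, backend health endpoint on port 3001, frontend on port 3000):
cd /opt/APP_NAME
# Pull the images referenced by the production compose file
docker compose -f docker-compose.prod.yml pull
# Recreate the stack in the background
docker compose -f docker-compose.prod.yml up -d
# Confirm containers are up, then spot-check the services
docker compose -f docker-compose.prod.yml ps
curl -fsS http://localhost:3001/health && echo "backend OK"
curl -fsS -o /dev/null http://localhost:3000 && echo "frontend OK"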
Step 15: Configure Security
15.1 Configure Firewall
sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
Security Note: We only allow ports 80 and 443 for external access. The application services (backend on 3001, frontend on 3000) are only accessible through the Nginx reverse proxy, which provides better security and SSL termination.
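As a quick sanity check that only the intended ports are exposed, you can review the active rules and the listening sockets:
# Review the active UFW rules
sudo ufw status numbered
# See which processes are listening on the web and application ports
sudo ss -tlnp | grep -E ':(80|443|3000|3001)\b'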
15.2 Configure Fail2ban
Fail2ban is an intrusion prevention system that monitors logs and automatically blocks IP addresses showing malicious behavior.
# Install fail2ban (if not already installed)
sudo apt install -y fail2ban
# Create a custom jail configuration
sudo tee /etc/fail2ban/jail.local > /dev/null << 'EOF'
[DEFAULT]
# Ban time in seconds (24 hours)
bantime = 86400
# Find time in seconds (10 minutes)
findtime = 600
# Max retries before ban
maxretry = 3
# Ban action (use ufw since we're using ufw firewall)
banaction = ufw
# Log level
loglevel = INFO
# Log target
logtarget = /var/log/fail2ban.log
# SSH protection
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
# Note: Nginx protection is handled by the firewall and application-level security
# Docker containers are isolated, and Nginx logs are not directly accessible to fail2ban
# Web attack protection is provided by:
# 1. UFW firewall (ports 80/443 only)
# 2. Nginx security headers and rate limiting
# 3. Application-level input validation
EOF
# Enable and start fail2ban
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
# Verify fail2ban is running
sudo systemctl status fail2ban
# Check current jails
sudo fail2ban-client status
What this does:
- SSH Protection: Blocks IPs that fail SSH login 3 times in 10 minutes
- 24-hour bans: Banned IPs are blocked for 24 hours
- Automatic monitoring: Continuously watches SSH logs
Web Security Note: Since Nginx runs in a Docker container, web attack protection is handled by:
- UFW Firewall: Only allows ports 80/443 (no direct access to app services)
- Nginx Security: Built-in rate limiting and security headers
- Application Security: Input validation in the backend/frontend code
Monitoring Fail2ban:
# Check banned IPs
sudo fail2ban-client status sshd
# Unban an IP if needed
sudo fail2ban-client set sshd unbanip IP_ADDRESS
# View fail2ban logs
sudo tail -f /var/log/fail2ban.log
# Check all active jails
sudo fail2ban-client status
Why This Matters for Production:
- Your server is exposed: The Production Linode is accessible from the internet
- Automated attacks: Bots constantly scan for vulnerable servers
- Resource protection: Prevents attackers from consuming CPU/memory
- Security layers: Works with the firewall to provide defense in depth
Step 16: Test Production Setup
16.1 Test Docker Installation
docker --version
docker compose --version
16.2 Test Docker Registry Access
# Test pulling an image from the CI/CD Docker Registry
docker pull YOUR_CI_CD_IP/APP_NAME/test:latest
Important: Replace YOUR_CI_CD_IP with your actual CI/CD Linode IP address.
Note: Application deployment testing will be done in Step 18, after the complete CI/CD pipeline is set up.
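If this pull fails because no test image has been pushed yet, you can seed one from the CI/CD Linode first. This is a sketch that assumes the registry configuration from the earlier CI/CD steps (authenticated pushes with REGISTRY_USER, unauthenticated pulls):
# On the CI/CD Linode: push a small test image to the registry
docker pull alpine:latest
docker tag alpine:latest YOUR_CI_CD_IP/APP_NAME/test:latest
docker login YOUR_CI_CD_IP        # use REGISTRY_USER / REGISTRY_PASSWORD
docker push YOUR_CI_CD_IP/APP_NAME/test:latest
# Back on the Production Linode, the unauthenticated pull above should now succeed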
Part 3: Final Configuration and Testing
Step 17: Configure Forgejo Repository Secrets
Go to your Forgejo repository and add these secrets in Settings → Secrets and Variables → Actions:
Required Secrets:
- CI_HOST: Your CI/CD Linode IP address (used for Docker Registry access)
- PRODUCTION_IP: Your Production Linode IP address
- PROD_DEPLOY_USER: The production deployment user name (e.g., prod-deploy)
- PROD_SERVICE_USER: The production service user name (e.g., prod-service)
- APP_NAME: Your application name (e.g., sharenet)
- POSTGRES_PASSWORD: A strong password for the PostgreSQL database
- REGISTRY_USER: Docker Registry username for CI operations (e.g., registry-user)
- REGISTRY_PASSWORD: Docker Registry password for CI operations (the password you set in the environment file, default: your-secure-registry-password)
Optional Secrets (for domain users):
- DOMAIN: Your domain name (e.g., example.com)
- EMAIL: Your email for SSL certificate notifications
Note: This setup uses custom Dockerfiles for testing environments with base images stored in Docker Registry. The CI pipeline automatically checks if base images exist in Docker Registry and pulls them from Docker Hub only when needed, eliminating rate limiting issues and providing better control over the testing environment.
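For reference, the "registry first, Docker Hub as fallback" check the pipeline performs looks roughly like the sketch below. The rust-base image name is purely illustrative and not part of this guide's configuration; the push also assumes you are already logged in to the registry:
# Hypothetical base image name, for illustration only
BASE_IMAGE="YOUR_CI_CD_IP/APP_NAME/rust-base:latest"
# Ask the registry (v2 API) whether the tag already exists
if curl -fsk "https://YOUR_CI_CD_IP/v2/APP_NAME/rust-base/tags/list" | grep -q '"latest"'; then
  docker pull "$BASE_IMAGE"
else
  # Not cached yet: pull once from Docker Hub, then store it in the registry
  docker pull rust:latest
  docker tag rust:latest "$BASE_IMAGE"
  docker push "$BASE_IMAGE"
fi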
Step 18: Test Complete Pipeline
18.1 Trigger a Test Build
- Make a small change to your repository (e.g., update a comment or add a test file)
- Commit and push the changes to trigger the CI/CD pipeline
- Monitor the build in your Forgejo repository → Actions tab
18.2 Verify Pipeline Steps
The pipeline should execute these steps in order:
- Checkout: Clone the repository
- Setup DinD: Configure Docker-in-Docker environment
- Test Backend: Run backend tests in isolated environment
- Test Frontend: Run frontend tests in isolated environment
- Build Backend: Build backend Docker image in DinD
- Build Frontend: Build frontend Docker image in DinD
- Push to Registry: Push images to Docker Registry from DinD
- Deploy to Production: Deploy to production server
18.3 Check Docker Registry
# On CI/CD Linode
cd /opt/APP_NAME
# Check if new images were pushed (using correct registry port 443)
curl -k https://localhost:443/v2/_catalog
# Check specific repository tags
curl -k https://localhost:443/v2/APP_NAME/backend/tags/list
curl -k https://localhost:443/v2/APP_NAME/frontend/tags/list
# Alternative: Check registry via Caddy
# Open https://YOUR_CI_CD_IP in your browser
18.4 Verify Production Deployment
# On Production Linode
cd /opt/APP_NAME
# Check if containers are running with new images
docker compose -f docker-compose.prod.yml ps
# Check application health
curl http://localhost:3000
curl http://localhost:3001/health
# Check container logs for any errors
docker compose -f docker-compose.prod.yml logs backend
docker compose -f docker-compose.prod.yml logs frontend
18.5 Test Application Functionality
- Frontend: Visit your production URL (IP or domain)
- Backend API: Test API endpoints
- Database: Verify database connections
- Logs: Check for any errors in application logs
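A minimal command-line smoke test covering these checks might look like the following (run on the Production Linode; the postgres service and user names are assumptions based on a typical compose setup, so adjust them to match your docker-compose.prod.yml):
cd /opt/APP_NAME
# Frontend and backend respond
curl -fsS -o /dev/null -w "frontend: %{http_code}\n" http://localhost:3000
curl -fsS -o /dev/null -w "backend: %{http_code}\n" http://localhost:3001/health
# Database accepts connections (service/user names are assumptions)
docker compose -f docker-compose.prod.yml exec postgres pg_isready -U postgres
# Look for recent errors in the application logs
docker compose -f docker-compose.prod.yml logs --since 10m | grep -i error || echo "no recent errors"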
Step 19: Set Up SSL/TLS (Optional - Domain Users)
19.1 Install SSL Certificate
If you have a domain pointing to your Production Linode:
# On Production Linode
sudo certbot --nginx -d your-domain.com
# Verify certificate
sudo certbot certificates
19.2 Configure Auto-Renewal
# Test auto-renewal
sudo certbot renew --dry-run
# Add to crontab for automatic renewal
sudo crontab -e
# Add this line:
# 0 12 * * * /usr/bin/certbot renew --quiet
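Note that on Ubuntu, the certbot package usually installs a systemd timer that already handles renewal, in which case the cron entry above is redundant; you can check before adding it:
# If a certbot timer is listed here, automatic renewal is already scheduled
systemctl list-timers | grep -i certbot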
Step 20: Final Verification
20.1 Security Check
# Check firewall status
sudo ufw status
# Check fail2ban status
sudo systemctl status fail2ban
# Check SSH access (should be key-based only)
sudo grep "PasswordAuthentication" /etc/ssh/sshd_config
20.2 Performance Check
# Check system resources
htop
# Check disk usage
df -h
# Check Docker disk usage
docker system df
20.3 Backup Verification
# Test backup script
cd /opt/APP_NAME
./scripts/backup.sh --dry-run
# Run actual backup
./scripts/backup.sh
Step 21: Documentation and Maintenance
21.1 Update Documentation
- Update README.md with deployment information
- Document environment variables and their purposes
- Create troubleshooting guide for common issues
- Document backup and restore procedures
21.2 Set Up Monitoring Alerts
# Set up monitoring cron job
(crontab -l 2>/dev/null; echo "*/5 * * * * cd /opt/APP_NAME && ./scripts/monitor.sh --type production >> /tmp/monitor.log 2>&1") | crontab -
# Check monitoring logs
tail -f /tmp/monitor.log
21.3 Regular Maintenance Tasks
Daily:
- Check application logs for errors
- Monitor system resources
- Verify backup completion
Weekly:
- Review security logs
- Update system packages
- Test backup restoration
Monthly:
- Review and rotate logs
- Update SSL certificates
- Review and update documentation
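Most of these recurring tasks map to a handful of commands; a rough maintenance sketch (adapt retention periods and paths to your environment) might be:
# Weekly: update packages and review security logs
sudo apt update && sudo apt upgrade -y
sudo tail -n 100 /var/log/fail2ban.log
# Monthly: trim old journal entries and unused Docker data
sudo journalctl --vacuum-time=30d
docker system prune -af --filter "until=720h"
# Check SSL certificate status (renewal itself is automatic)
sudo certbot certificates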
🎉 Congratulations!
You have successfully set up a complete CI/CD pipeline with:
- ✅ Automated testing on every code push in isolated DinD environment
- ✅ Docker image building and Docker Registry storage
- ✅ Automated deployment to production
- ✅ Health monitoring and logging
- ✅ Backup and cleanup automation
- ✅ Security hardening with proper user separation
- ✅ SSL/TLS support for production (optional)
- ✅ Zero resource contention between CI/CD and the Docker Registry
Your application is now ready for continuous deployment with proper security, monitoring, and maintenance procedures in place!
Appendix: CI/CD Workflow Summary Table
| Stage | What Runs | How/Where |
|---|---|---|
| Test | All integration/unit tests | docker-compose.test.yml |
| Build | Build & push images | Direct Docker commands |
| Deploy | Deploy to production | docker-compose.prod.yml |
How it works:
- Test: The workflow spins up a full test environment using docker-compose.test.yml (Postgres, backend, frontend, etc.) and runs all tests inside containers.
- Build: If tests pass, the workflow uses direct Docker commands (no compose file) to build backend and frontend images and push them to the Docker Registry.
- Deploy: The production runner pulls images from the Docker Registry and deploys the stack using docker-compose.prod.yml.
Expected Output:
- Each stage runs in its own isolated environment.
- Test failures stop the pipeline before any images are built or deployed.
- Only tested images are deployed to production.
Manual Testing with docker-compose.test.yml
You can use the same test environment locally that the CI pipeline uses for integration testing. This is useful for debugging, development, or verifying your setup before pushing changes.
Note: Since the CI pipeline runs tests inside a DinD container, local testing requires a similar setup.
Start the Test Environment (Local Development)
For local development testing, you can run the test environment directly:
# Start the test environment locally
docker compose -f docker-compose.test.yml up -d
# Check service health
docker compose -f docker-compose.test.yml ps
Important: This local setup is for development only. The CI pipeline uses a more isolated DinD environment.
Run Tests Manually
You can now exec into the containers to run tests or commands as needed. For example:
# Run backend tests
docker exec ci-cd-test-rust cargo test --all
# Run frontend tests
docker exec ci-cd-test-node npm run test
Cleanup
When you're done, stop and remove all test containers:
docker compose -f docker-compose.test.yml down
Tip: The CI pipeline uses the same test containers but runs them inside a DinD environment for complete isolation.
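If you want closer parity with CI, you can run the same compose file inside a throwaway DinD container. This is a sketch, not the exact CI setup: it assumes you run it from the repository root and that the docker:dind image you use includes the compose plugin (recent tags do; otherwise install it inside the container first):
# Start an isolated Docker daemon in a privileged DinD container
docker run -d --privileged --name local-dind docker:dind
sleep 5   # give the inner daemon a moment to start
# Copy the repository into the DinD container and run the test stack there
docker cp . local-dind:/workspace
docker exec -w /workspace local-dind docker compose -f docker-compose.test.yml up -d
docker exec -w /workspace local-dind docker compose -f docker-compose.test.yml ps
# Tear everything down by removing the DinD container
docker rm -f local-dind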