# CI/CD Pipeline Setup Guide

This guide covers setting up a complete Continuous Integration/Continuous Deployment (CI/CD) pipeline with a CI/CD Linode and Production Linode for automated builds, testing, and deployments using Docker-in-Docker (DinD) for isolated CI operations.

## Architecture Overview

```
┌──────────────────┐      ┌──────────────────┐      ┌──────────────────┐
│   Forgejo Host   │      │   CI/CD Linode   │      │ Production Linode│
│   (Repository)   │      │ (Actions Runner) │      │ (Docker Deploy)  │
│                  │      │ + Harbor Registry│      │                  │
│                  │      │ + DinD Container │      │                  │
└──────────────────┘      └──────────────────┘      └──────────────────┘
         │                         │                         │
         │                         │                         │
         └────────── Push ─────────┼─────────────────────────┘
                                   │
                                   └──── Deploy ─────────────┘
```

## Pipeline Flow

1. **Code Push**: Developer pushes code to Forgejo repository
2. **Automated Testing**: CI/CD Linode runs tests in isolated DinD environment
3. **Image Building**: If tests pass, Docker images are built within DinD
4. **Registry Push**: Images are pushed to Harbor registry from DinD
5. **Production Deployment**: Production Linode pulls images and deploys
6. **Health Check**: Application is verified and accessible
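The repository already contains the real workflow at `.forgejo/workflows/ci.yml` (see Step 17.2). Purely as an illustration of how these stages map onto Forgejo Actions, here is a minimal three-job skeleton using the runner labels configured later in this guide (`docker`/`dind` on the CI/CD Linode, `production` on the Production Linode). Job names, branch names, and the commands are placeholders, not the repository's actual workflow.

```yaml
# Illustrative skeleton only - the real pipeline lives in .forgejo/workflows/ci.yml
name: ci-cd
on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: docker            # CI/CD Linode runner (labels: ubuntu-latest,docker,dind)
    steps:
      - name: Run tests inside the DinD container
        run: docker exec ci-cd-dind docker run --rm alpine:latest echo "test containers run here"

  build-and-push:
    needs: test
    runs-on: docker
    steps:
      - name: Build images and push them to Harbor
        run: echo "docker build / docker push YOUR_CI_CD_IP/APP_NAME/... happens here"

  deploy:
    needs: build-and-push
    runs-on: production        # Production Linode runner (label: production)
    steps:
      - name: Pull images and deploy the stack
        run: docker compose -f docker-compose.prod.yml up -d
```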
## Key Benefits of DinD Approach

### **For Rust Testing:**
- ✅ **Fresh environment** every test run
- ✅ **Parallel execution** capability
- ✅ **Isolated dependencies** - no test pollution
- ✅ **Fast cleanup** - just restart DinD container

### **For CI/CD Operations:**
- ✅ **Zero resource contention** with Harbor
- ✅ **Simple cleanup** - one-line container restart
- ✅ **Perfect isolation** - CI/CD can't affect Harbor
- ✅ **Consistent environment** - same setup every time

### **For Maintenance:**
- ✅ **Reduced complexity** - no complex cleanup scripts
- ✅ **Easy debugging** - isolated environment
- ✅ **Reliable operations** - no interference between services

## Prerequisites

- Two Ubuntu 24.04 LTS Linodes with root access
- Basic familiarity with Linux commands and SSH
- Forgejo repository with Actions enabled
- **Optional**: Domain name for Production Linode (for SSL/TLS)

## Quick Start

1. **Set up CI/CD Linode** (Steps 1-14)
2. **Set up Production Linode** (Steps 15-27)
3. **Configure SSH key exchange** (Step 28)
4. **Set up Forgejo repository secrets** (Step 29)
5. **Test the complete pipeline** (Step 30)

## What's Included

### CI/CD Linode Features
- Forgejo Actions runner for automated builds
- **Docker-in-Docker (DinD) container** for isolated CI operations
- Harbor container registry for image storage
- Harbor web UI for image management
- Built-in vulnerability scanning with Trivy
- Role-based access control and audit logs
- Secure SSH communication with production
- **Simplified cleanup** - just restart DinD container

### Production Linode Features
- Docker-based application deployment
- **Optional SSL/TLS certificate management** (if domain is provided)
- Nginx reverse proxy with security headers
- Automated backups and monitoring
- Firewall and fail2ban protection

### Pipeline Features
- **Automated testing** on every code push in isolated environment
- **Automated image building** and registry push from DinD
- **Automated deployment** to production
- **Rollback capability** with image versioning
- **Health monitoring** and logging
- **Zero resource contention** between CI/CD and Harbor

## Security Model and User Separation

This setup uses a **principle of least privilege** approach with separate users for different purposes:

### User Roles

1. **Root User**
   - **Purpose**: Initial system setup only
   - **SSH Access**: Disabled after setup
   - **Privileges**: Full system access (used only during initial configuration)

2. **Deployment User (`DEPLOY_USER`)**
   - **Purpose**: SSH access, deployment tasks, system administration
   - **SSH Access**: Enabled with key-based authentication
   - **Privileges**: Sudo access for deployment and administrative tasks
   - **Examples**: `deploy`, `ci`, `admin`

3. **Service Account (`SERVICE_USER`)**
   - **Purpose**: Running application services (Docker containers, databases)
   - **SSH Access**: None (no login shell)
   - **Privileges**: No sudo access, minimal system access
   - **Examples**: `appuser`, `service`, `app`

### Security Benefits

- **No root SSH access**: Eliminates the most common attack vector
- **Principle of least privilege**: Each user has only the access they need
- **Separation of concerns**: Deployment tasks vs. service execution are separate
- **Audit trail**: Clear distinction between deployment and service activities
- **Reduced attack surface**: Service account has minimal privileges

### File Permissions

- **Application files**: Owned by `SERVICE_USER` for security
- **Docker operations**: Run by `DEPLOY_USER` with sudo (deployment only)
- **Service execution**: Run by `SERVICE_USER` (no sudo needed)
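The setup steps below apply this split concretely; as a quick sketch using the placeholder names chosen in Step 0.3 (these commands simply mirror what later steps do, nothing new is introduced here):

```bash
# Sketch of the permission model used throughout this guide
# (SERVICE_USER, DEPLOY_USER, APP_NAME are the placeholders chosen in Step 0.3)

# Application files are owned by the service account
sudo chown -R SERVICE_USER:SERVICE_USER /opt/APP_NAME

# Deployment and administrative work goes through the deployment user and sudo
ssh DEPLOY_USER@YOUR_CI_CD_IP 'sudo systemctl restart docker'

# The service account runs workloads without sudo (it is added to the docker group later)
sudo su - SERVICE_USER -c 'docker ps'
```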
---

## Prerequisites and Initial Setup

### What's Already Done (Assumptions)

This guide assumes you have already:

1. **Created two Ubuntu 24.04 LTS Linodes** with root access
2. **Set root passwords** for both Linodes
3. **Installed an SSH client** on your local machine
4. **Created a Forgejo repository** with Actions enabled
5. **Optional**: Domain name pointing to the Production Linode's IP addresses

### Step 0: Initial SSH Access and Verification

Before proceeding with the setup, you need to establish initial SSH access to both Linodes.

#### 0.1 Get Your Linode IP Addresses

From your Linode dashboard, note the IP addresses for:
- **CI/CD Linode**: `YOUR_CI_CD_IP` (IP address only, no domain needed)
- **Production Linode**: `YOUR_PRODUCTION_IP` (IP address for SSH, domain for web access)

#### 0.2 Test Initial SSH Access

Test SSH access to both Linodes:

```bash
# Test CI/CD Linode (IP address only)
ssh root@YOUR_CI_CD_IP

# Test Production Linode (IP address only)
ssh root@YOUR_PRODUCTION_IP
```

**Expected output**: SSH login prompt asking for the root password.

**If something goes wrong**:
- Verify the IP addresses are correct
- Check that SSH is enabled on the Linodes
- Ensure your local machine can reach the Linodes (no firewall blocking)

#### 0.3 Choose Your Names

Before proceeding, decide on:

1. **Service Account Name**: Choose a username for the service account (e.g., `appuser`, `deploy`, `service`)
   - Replace `SERVICE_USER` in this guide with your chosen name
   - This account runs the actual application services

2. **Deployment User Name**: Choose a username for deployment tasks (e.g., `deploy`, `ci`, `admin`)
   - Replace `DEPLOY_USER` in this guide with your chosen name
   - This account has sudo privileges for deployment tasks

3. **Application Name**: Choose a name for your application (e.g., `myapp`, `webapp`, `api`)
   - Replace `APP_NAME` in this guide with your chosen name

4. **Domain Name** (Optional): If you have a domain, note it for SSL configuration
   - Replace `your-domain.com` in this guide with your actual domain

**Example**:
- If you choose `appuser` as the service account, `deploy` as the deployment user, and `myapp` as the application name:
  - Replace all `SERVICE_USER` with `appuser`
  - Replace all `DEPLOY_USER` with `deploy`
  - Replace all `APP_NAME` with `myapp`
- If you have a domain `example.com`, replace `your-domain.com` with `example.com`

**Security Model**:
- **Service Account (`SERVICE_USER`)**: Runs application services, no sudo access
- **Deployment User (`DEPLOY_USER`)**: Handles deployments via SSH, has sudo access
- **Root**: Only used for initial setup, then disabled for SSH access
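If you keep a local copy of this guide (or of scripts derived from it), one way to apply your chosen names in bulk is a simple search-and-replace pass; `guide.md` below is just a hypothetical local file name used for illustration:

```bash
# Example only: substitute your chosen names into a local copy of this guide
sed -i \
  -e 's/SERVICE_USER/appuser/g' \
  -e 's/DEPLOY_USER/deploy/g' \
  -e 's/APP_NAME/myapp/g' \
  -e 's/your-domain\.com/example.com/g' \
  guide.md
```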
#### 0.4 Set Up SSH Key Authentication for Local Development

**Important**: This step should be done on both Linodes to enable secure SSH access from your local development machine.

##### 0.4.1 Generate SSH Key on Your Local Machine

On your local development machine, generate an SSH key pair:

```bash
# Generate SSH key pair (if you don't already have one)
ssh-keygen -t ed25519 -C "your-email@example.com" -f ~/.ssh/id_ed25519 -N ""

# Or use existing key if you have one
ls ~/.ssh/id_ed25519.pub
```

##### 0.4.2 Add Your Public Key to Both Linodes

Copy your public key to both Linodes:

```bash
# Copy your public key to CI/CD Linode
ssh-copy-id root@YOUR_CI_CD_IP

# Copy your public key to Production Linode
ssh-copy-id root@YOUR_PRODUCTION_IP
```

**Alternative method** (if ssh-copy-id doesn't work):
```bash
# Copy your public key content
cat ~/.ssh/id_ed25519.pub

# Then manually add to each server
ssh root@YOUR_CI_CD_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys

ssh root@YOUR_PRODUCTION_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys
```

##### 0.4.3 Test SSH Key Authentication

Test that you can access both servers without passwords:

```bash
# Test CI/CD Linode
ssh root@YOUR_CI_CD_IP 'echo "SSH key authentication works for CI/CD"'

# Test Production Linode
ssh root@YOUR_PRODUCTION_IP 'echo "SSH key authentication works for Production"'
```

**Expected output**: The echo messages should appear without password prompts.

##### 0.4.4 Create Deployment Users

On both Linodes, create the deployment user with sudo privileges:

```bash
# Create deployment user
sudo useradd -m -s /bin/bash DEPLOY_USER
sudo usermod -aG sudo DEPLOY_USER

# Set a secure password (for emergency access only)
echo "DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd

# Copy your SSH key to the deployment user
sudo mkdir -p /home/DEPLOY_USER/.ssh
sudo cp ~/.ssh/authorized_keys /home/DEPLOY_USER/.ssh/
sudo chown -R DEPLOY_USER:DEPLOY_USER /home/DEPLOY_USER/.ssh
sudo chmod 700 /home/DEPLOY_USER/.ssh
sudo chmod 600 /home/DEPLOY_USER/.ssh/authorized_keys

# Allow passwordless sudo for the deployment user (needed for unattended CI/CD tasks)
echo "DEPLOY_USER ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/DEPLOY_USER
sudo chmod 440 /etc/sudoers.d/DEPLOY_USER
```

**Security Note**: This configuration allows the DEPLOY_USER to use sudo without a password, which is more secure for CI/CD automation since there are no passwords to store or expose. The random password is set for emergency console access only.
##### 0.4.5 Test Sudo Access

Test that the deployment user can use sudo without password prompts:

```bash
# Test sudo access
ssh DEPLOY_USER@YOUR_CI_CD_IP 'sudo whoami'
ssh DEPLOY_USER@YOUR_PRODUCTION_IP 'sudo whoami'
```

**Expected output**: Both commands should return `root` without prompting for a password.

##### 0.4.6 Test Deployment User Access

Test that you can access both servers as the deployment user:

```bash
# Test CI/CD Linode
ssh DEPLOY_USER@YOUR_CI_CD_IP 'echo "Deployment user SSH access works for CI/CD"'

# Test Production Linode
ssh DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "Deployment user SSH access works for Production"'
```

**Expected output**: The echo messages should appear without password prompts.

##### 0.4.7 Create SSH Config for Easy Access

On your local machine, create an SSH config file for easy access:

```bash
# Create SSH config
cat > ~/.ssh/config << 'EOF'
Host ci-cd-dev
    HostName YOUR_CI_CD_IP
    User DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no

Host production-dev
    HostName YOUR_PRODUCTION_IP
    User DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no
EOF

chmod 600 ~/.ssh/config
```

Now you can access servers easily:
```bash
ssh ci-cd-dev
ssh production-dev
```
---

## Part 1: CI/CD Linode Setup

### Step 1: Initial System Setup

#### 1.1 Update the System

```bash
sudo apt update && sudo apt upgrade -y
```

**What this does**: Updates package lists and upgrades all installed packages to their latest versions.

**Expected output**: A list of packages being updated, followed by completion messages.

#### 1.2 Configure Timezone

```bash
# Configure timezone interactively
sudo dpkg-reconfigure tzdata

# Verify timezone setting
date
```

**What this does**: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).

**Expected output**: After selecting your timezone, the `date` command should show the current date and time in your selected timezone.

#### 1.3 Configure /etc/hosts

```bash
# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts

# Verify the configuration
cat /etc/hosts
```

**What this does**:
- Adds localhost entries for both IPv4 and IPv6 addresses to `/etc/hosts`
- Ensures proper localhost resolution for both IPv4 and IPv6

**Important**: Replace `YOUR_CI_CD_IPV4_ADDRESS` and `YOUR_CI_CD_IPV6_ADDRESS` with the actual IPv4 and IPv6 addresses of your CI/CD Linode obtained from your Linode dashboard.

**Expected output**: The `/etc/hosts` file should show entries for `127.0.0.1`, `::1`, and your Linode's actual IP addresses all mapping to `localhost`.

#### 1.4 Install Essential Packages

```bash
sudo apt install -y \
    curl \
    wget \
    git \
    build-essential \
    pkg-config \
    libssl-dev \
    ca-certificates \
    apt-transport-https \
    software-properties-common \
    apache2-utils
```
**What this does**: Installs development tools, SSL libraries, and utilities needed for Docker and application building.

### Step 2: Create Users

#### 2.1 Create Service Account

```bash
# Create dedicated group for the service account
sudo groupadd -r SERVICE_USER

# Create service account user with dedicated group
sudo useradd -r -g SERVICE_USER -s /bin/bash -m -d /home/SERVICE_USER SERVICE_USER
echo "SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd
```

#### 2.2 Verify Users

```bash
sudo su - SERVICE_USER
whoami
pwd
exit

sudo su - DEPLOY_USER
whoami
pwd
exit
```

### Step 3: Clone Repository for Registry Configuration

```bash
# Switch to DEPLOY_USER (who has sudo access)
sudo su - DEPLOY_USER

# Create application directory and clone repository
sudo mkdir -p /opt/APP_NAME
sudo chown SERVICE_USER:SERVICE_USER /opt/APP_NAME
cd /opt
sudo git clone https://your-forgejo-instance/your-username/APP_NAME.git
sudo chown -R SERVICE_USER:SERVICE_USER APP_NAME/

# Verify the registry folder exists
ls -la /opt/APP_NAME/registry/
```

**Important**: Replace `your-forgejo-instance`, `your-username`, and `APP_NAME` with your actual Forgejo instance URL, username, and application name.

**What this does**:
- DEPLOY_USER creates the directory structure and clones the repository
- SERVICE_USER owns all the files for security
- Registry configuration files are now available at `/opt/APP_NAME/registry/`

### Step 4: Install Docker

#### 4.1 Add Docker Repository

```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
```

#### 4.2 Install Docker Packages

```bash
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
```

#### 4.3 Configure Docker for Service Account

```bash
sudo usermod -aG docker SERVICE_USER
```
### Step 5: Set Up Harbor Container Registry

#### 5.1 Generate SSL Certificates

```bash
# Create system SSL directory for Harbor certificates
sudo mkdir -p /etc/ssl/registry

# Get your actual IP address
YOUR_ACTUAL_IP=$(curl -4 -s ifconfig.me)
echo "Your IP address is: $YOUR_ACTUAL_IP"

# Create OpenSSL configuration file with proper SANs
sudo tee /etc/ssl/registry/openssl.conf << EOF
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no

[req_distinguished_name]
C = US
ST = State
L = City
O = Organization
CN = $YOUR_ACTUAL_IP

[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
IP.1 = $YOUR_ACTUAL_IP
DNS.1 = $YOUR_ACTUAL_IP
DNS.2 = localhost
EOF

# Generate self-signed certificate with proper SANs
sudo openssl req -x509 -newkey rsa:4096 -keyout /etc/ssl/registry/registry.key -out /etc/ssl/registry/registry.crt -days 365 -nodes -extensions v3_req -config /etc/ssl/registry/openssl.conf

# Set proper permissions
sudo chmod 600 /etc/ssl/registry/registry.key
sudo chmod 644 /etc/ssl/registry/registry.crt
sudo chmod 644 /etc/ssl/registry/openssl.conf
```

**Important**: The certificate is now generated with proper Subject Alternative Names (SANs) including your IP address, which is required for TLS certificate validation by Docker and other clients.

**Note**: The permissions are set to:
- `registry.key`: `600` (owner read/write only) - private key must be secure
- `registry.crt`: `644` (owner read/write, group/others read) - certificate can be read by services
- `openssl.conf`: `644` (owner read/write, group/others read) - configuration file for reference
#### 5.1.1 Configure Docker to Trust Harbor Registry

```bash
# Add the certificate to system CA certificates
sudo cp /etc/ssl/registry/registry.crt /usr/local/share/ca-certificates/registry.crt
sudo update-ca-certificates

# Configure Docker to trust the Harbor registry
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json << EOF
{
  "insecure-registries": ["YOUR_CI_CD_IP"],
  "registry-mirrors": []
}
EOF

# Restart Docker to apply the new configuration
sudo systemctl restart docker
```

**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address. This configuration tells Docker to trust your Harbor registry and allows Docker login to work properly.

#### 5.2 Generate Secure Passwords and Secrets

```bash
# Set environment variables for Harbor
export HARBOR_HOSTNAME=$YOUR_ACTUAL_IP
export HARBOR_ADMIN_PASSWORD="Harbor12345"

# Generate secure database password for Harbor
export DB_PASSWORD=$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-25)

# Generate secure secrets for Harbor
export CORE_SECRET=$(openssl rand -hex 16)
export JOBSERVICE_SECRET=$(openssl rand -hex 16)

echo "Generated secrets:"
echo "DB_PASSWORD: $DB_PASSWORD"
echo "CORE_SECRET: $CORE_SECRET"
echo "JOBSERVICE_SECRET: $JOBSERVICE_SECRET"

# Save secrets securely for future reference
cat > /opt/APP_NAME/harbor-secrets.txt << EOF
# Harbor Secrets - KEEP THESE SECURE!
# Generated on: $(date)
# CI/CD IP: $YOUR_ACTUAL_IP

HARBOR_HOSTNAME=$HARBOR_HOSTNAME
HARBOR_ADMIN_PASSWORD=$HARBOR_ADMIN_PASSWORD
DB_PASSWORD=$DB_PASSWORD
CORE_SECRET=$CORE_SECRET
JOBSERVICE_SECRET=$JOBSERVICE_SECRET

# IMPORTANT: Store this file securely and keep a backup!
# You will need these secrets for:
# - Harbor upgrades
# - Database troubleshooting
# - Disaster recovery
# - Service restoration
EOF

# Set secure permissions on secrets file
chmod 600 /opt/APP_NAME/harbor-secrets.txt
echo "Secrets saved to /opt/APP_NAME/harbor-secrets.txt"
echo "IMPORTANT: Keep this file secure and backed up!"
```

**Important**:
- Change the default passwords for production use. The default admin password is `Harbor12345` - change this immediately after first login.
- The generated secrets (`CORE_SECRET` and `JOBSERVICE_SECRET`) are cryptographically secure random values used for encrypting sensitive data.
- Store these secrets securely as they will be needed for Harbor upgrades or troubleshooting.
- **CRITICAL**: The secrets file contains sensitive information. Keep it secure and backed up!

#### 5.3 Install Harbor Using Official Installer

```bash
# Switch to DEPLOY_USER (who has sudo access)
sudo su - DEPLOY_USER

cd /opt/APP_NAME

# Download Harbor 2.10.0 offline installer
sudo wget https://github.com/goharbor/harbor/releases/download/v2.10.0/harbor-offline-installer-v2.10.0.tgz

sudo tar -xzf harbor-offline-installer-v2.10.0.tgz

cd harbor
sudo cp harbor.yml.tmpl harbor.yml

# Edit harbor.yml configuration
sudo nano harbor.yml
```

**Important**: In the `harbor.yml` file, update the following variables (a sketch of the relevant fragment follows below):
- `hostname: YOUR_CI_CD_IP` (replace with your actual IP)
- `certificate: /etc/ssl/registry/registry.crt`
- `private_key: /etc/ssl/registry/registry.key`
- `password: <the DB_PASSWORD generated in Step 5.2>`

**Note**: Leave `harbor_admin_password` as `Harbor12345` for now. This will be changed at first login through the UI after launching Harbor.
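For orientation, the relevant part of `harbor.yml` ends up looking roughly like this. The surrounding keys come from Harbor's 2.10 template; only the values shown here are the ones this guide asks you to change, and the exact layout may differ slightly in your template version.

```yaml
# Relevant harbor.yml fragment (Harbor 2.10 template) - values this guide changes
hostname: YOUR_CI_CD_IP

https:
  port: 443
  certificate: /etc/ssl/registry/registry.crt
  private_key: /etc/ssl/registry/registry.key

# Leave as-is for now; change it via the UI after first login (Step 5.9)
harbor_admin_password: Harbor12345

database:
  # Use the DB_PASSWORD generated in Step 5.2
  password: <DB_PASSWORD from Step 5.2>
```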
#### 5.4 Prepare and Install Harbor

```bash
# Prepare Harbor configuration
sudo ./prepare

# Install Harbor with Trivy vulnerability scanner
sudo ./install.sh --with-trivy

cd ..

# Change harbor folder permissions recursively to SERVICE_USER
sudo chown -R SERVICE_USER:SERVICE_USER harbor

# Switch to SERVICE_USER to run installation again as non-root
sudo su - SERVICE_USER

cd /opt/APP_NAME/harbor

# Run the installer again as SERVICE_USER so Harbor runs under the service account
# (a few env file permissions still need fixing in the next step)
./install.sh --with-trivy

# Exit SERVICE_USER shell
exit
```

#### 5.5 Fix Permission Issues

```bash
# Switch back to DEPLOY_USER to adjust the permissions for various env files
cd /opt/APP_NAME/harbor

sudo chown SERVICE_USER:SERVICE_USER common/config/jobservice/env
sudo chown SERVICE_USER:SERVICE_USER common/config/db/env
sudo chown SERVICE_USER:SERVICE_USER common/config/registryctl/env
sudo chown SERVICE_USER:SERVICE_USER common/config/trivy-adapter/env
sudo chown SERVICE_USER:SERVICE_USER common/config/core/env

# Exit DEPLOY_USER shell
exit
```

#### 5.6 Test Harbor Installation

```bash
# Switch to SERVICE_USER
sudo su - SERVICE_USER

cd /opt/APP_NAME/harbor

# Verify you can stop Harbor. All Harbor containers should stop.
docker compose down

# Verify you can bring Harbor back up. All Harbor containers should start back up.
docker compose up -d

# Exit SERVICE_USER shell
exit
```

**Important**: Harbor startup can take 2-3 minutes as it initializes the database and downloads vulnerability databases. The health check will ensure all services are running properly.

#### 5.7 Wait for Harbor Startup

```bash
# Monitor Harbor startup progress
cd /opt/APP_NAME/harbor
docker compose logs -f
```

**Expected output**: You should see logs from all Harbor services (core, database, redis, registry, portal, nginx, jobservice, trivy) starting up. Wait until you see "Harbor has been installed and started successfully" or similar success messages.
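If you would rather not watch the logs, a simple polling loop against the health endpoint (the same one used in Step 5.8) can wait for startup. This is just a convenience sketch; it assumes the JSON health response contains `"status":"healthy"` once all components are up.

```bash
# Optional: poll the Harbor health endpoint until it reports healthy
until curl -ks https://localhost/api/v2.0/health | grep -q '"status":"healthy"'; do
  echo "Waiting for Harbor to become healthy..."
  sleep 10
done
echo "Harbor is healthy."
```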
#### 5.8 Test Harbor Setup

```bash
# Check if all Harbor containers are running
cd /opt/APP_NAME/harbor
docker compose ps

# Test Harbor API (HTTPS)
curl -k https://localhost/api/v2.0/health

# Test Harbor UI (HTTPS)
curl -k -I https://localhost

# Expected output: HTTP/1.1 200 OK
```

**Important**: All Harbor services should show as "Up" in the `docker compose ps` output. The health check should return a JSON response indicating all services are healthy.

#### 5.9 Access Harbor Web UI

1. **Open your browser** and navigate to: `https://YOUR_CI_CD_IP`
2. **Login with default credentials**:
   - Username: `admin`
   - Password: `Harbor12345` (or your configured password)
3. **Change the admin password**:
   - Click on the user icon "admin" in the top right corner of the UI
   - Click "Change Password" from the dropdown menu
   - Enter your current password and a new secure password
   - Click "OK" to save the new password

#### 5.10 Configure Harbor for Public Read, Authenticated Write

1. **Create Application Project**:
   - Go to **Projects** → **New Project**
   - Set **Project Name**: `APP_NAME` (replace with your actual application name)
   - Set **Access Level**: `Public`
   - Click **OK**

2. **Create a User for CI/CD**:
   - Go to **Administration** → **Users** → **New User**
   - Set **Username**: `ci-user`
   - Set **Email**: `ci@example.com`
   - Set **Password**: `your-secure-password`
   - Click **OK**

3. **Assign Project Role to ci-user**:
   - Go to **Projects** → **APP_NAME** → **Members** → **+ User**
   - Select **User**: `ci-user`
   - Set **Role**: `Developer`
   - Click **OK**

**Note**: With a public project, anyone can pull images without authentication, but only authenticated users (like `ci-user`) can push images. This provides the perfect balance of ease of use for deployments and security for image management.
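If you prefer to script this instead of clicking through the UI, Harbor's v2.0 REST API can create the project and add the member. Treat the commands below as a sketch and verify them against your Harbor version: the `ci-user` account must already exist (create it in the UI or via `/api/v2.0/users`), and `role_id: 2` is the Developer role.

```bash
# Sketch: create the public project and grant ci-user the Developer role via the Harbor API
HARBOR_URL="https://YOUR_CI_CD_IP"
ADMIN_AUTH="admin:YOUR_ADMIN_PASSWORD"

# Create a public project named APP_NAME
curl -k -u "$ADMIN_AUTH" -X POST "$HARBOR_URL/api/v2.0/projects" \
  -H "Content-Type: application/json" \
  -d '{"project_name": "APP_NAME", "metadata": {"public": "true"}}'

# Add ci-user to the project as Developer (role_id 2)
curl -k -u "$ADMIN_AUTH" -X POST "$HARBOR_URL/api/v2.0/projects/APP_NAME/members" \
  -H "Content-Type: application/json" \
  -d '{"role_id": 2, "member_user": {"username": "ci-user"}}'
```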
#### 5.11 Test Harbor Authentication and Access Model

```bash
# Test Docker login to Harbor
docker login YOUR_CI_CD_IP
# Enter: ci-user and your-secure-password

# Create a test image
echo "FROM alpine:latest" > /tmp/test.Dockerfile
echo "RUN echo 'Hello from Harbor test image'" >> /tmp/test.Dockerfile

# Build and tag test image for APP_NAME project
docker build -f /tmp/test.Dockerfile -t YOUR_CI_CD_IP/APP_NAME/test:latest /tmp

# Push to Harbor (requires authentication)
docker push YOUR_CI_CD_IP/APP_NAME/test:latest

# Test public pull (no authentication required)
docker logout YOUR_CI_CD_IP
docker pull YOUR_CI_CD_IP/APP_NAME/test:latest

# Verify the image was pulled successfully
docker images | grep APP_NAME/test

# Test that unauthorized push is blocked
echo "FROM alpine:latest" > /tmp/unauthorized.Dockerfile
echo "RUN echo 'This push should fail'" >> /tmp/unauthorized.Dockerfile
docker build -f /tmp/unauthorized.Dockerfile -t YOUR_CI_CD_IP/APP_NAME/unauthorized:latest /tmp
docker push YOUR_CI_CD_IP/APP_NAME/unauthorized:latest
# Expected: This should fail with authentication error

# Clean up test images
docker rmi YOUR_CI_CD_IP/APP_NAME/test:latest
docker rmi YOUR_CI_CD_IP/APP_NAME/unauthorized:latest
```

**Expected behavior**:
- ✅ **Push requires authentication**: `docker push` only works when logged in
- ✅ **Pull works without authentication**: `docker pull` works without login for public projects
- ✅ **Unauthorized push is blocked**: `docker push` fails when not logged in
- ✅ **Web UI accessible**: Harbor UI is available at `https://YOUR_CI_CD_IP`

#### 5.12 Harbor Access Model Summary

Your Harbor registry is now configured with the following access model:

**APP_NAME Project**:
- ✅ **Pull (read)**: No authentication required
- ✅ **Push (write)**: Requires authentication
- ✅ **Web UI**: Accessible to view images

**Security Features**:
- ✅ **Vulnerability scanning**: Automatic CVE scanning with Trivy
- ✅ **Role-based access control**: Different user roles (admin, developer, guest)
- ✅ **Audit logs**: Complete trail of all operations

#### 5.13 Troubleshooting Common Harbor Issues

**Certificate Issues**:
```bash
# If you get "tls: failed to verify certificate" errors:
# 1. Verify certificate has proper SANs
openssl x509 -in /etc/ssl/registry/registry.crt -text -noout | grep -A 5 "Subject Alternative Name"

# 2. Regenerate certificate if SANs are missing
sudo openssl req -x509 -newkey rsa:4096 -keyout /etc/ssl/registry/registry.key -out /etc/ssl/registry/registry.crt -days 365 -nodes -extensions v3_req -config /etc/ssl/registry/openssl.conf

# 3. Restart Harbor and Docker
cd /opt/APP_NAME/harbor && docker compose down && docker compose up -d
sudo systemctl restart docker
```

**Connection Issues**:
```bash
# If you get "connection refused" errors:
# 1. Check if Harbor is running
docker compose ps

# 2. Check Harbor logs
docker compose logs

# 3. Verify ports are open
netstat -tuln | grep -E ':(80|443)'
```

**Docker Configuration Issues**:
```bash
# If Docker still can't connect after certificate fixes:
# 1. Verify Docker daemon configuration
cat /etc/docker/daemon.json

# 2. Check if certificate is in system CA store
ls -la /usr/local/share/ca-certificates/registry.crt

# 3. Update CA certificates and restart Docker
sudo update-ca-certificates
sudo systemctl restart docker
```
### Step 6: Set Up SSH for Production Communication

#### 6.1 Generate SSH Key Pair

**Important**: Run this command as the **DEPLOY_USER** (not root or SERVICE_USER). The DEPLOY_USER is responsible for deployment orchestration and SSH communication with the production server.

```bash
ssh-keygen -t ed25519 -C "ci-cd-server" -f ~/.ssh/id_ed25519 -N ""
```

**What this does**:
- Creates an SSH key pair for secure communication between CI/CD and production servers
- The DEPLOY_USER uses this key to SSH to the production server for deployments
- The key is stored in the DEPLOY_USER's home directory for security

**Security Note**: The DEPLOY_USER handles deployment orchestration, while the SERVICE_USER runs the actual CI pipeline. This separation provides better security through the principle of least privilege.

#### 6.2 Create SSH Config

```bash
cat > ~/.ssh/config << 'EOF'
Host production
    HostName YOUR_PRODUCTION_IP
    User DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF

chmod 600 ~/.ssh/config
```
### Step 7: Install Forgejo Actions Runner

#### 7.1 Download Runner

**Important**: Run this step as the **DEPLOY_USER** (not root or SERVICE_USER). The DEPLOY_USER handles deployment tasks including downloading and installing the Forgejo runner.

```bash
cd ~

# Get the latest version dynamically (requires jq: sudo apt install -y jq)
LATEST_VERSION=$(curl -s https://code.forgejo.org/api/v1/repos/forgejo/runner/releases | jq -r '.[0].tag_name')
echo "Downloading Forgejo runner version: $LATEST_VERSION"

# Download the latest runner
wget https://code.forgejo.org/forgejo/runner/releases/download/${LATEST_VERSION}/forgejo-runner-${LATEST_VERSION#v}-linux-amd64
chmod +x forgejo-runner-${LATEST_VERSION#v}-linux-amd64
sudo mv forgejo-runner-${LATEST_VERSION#v}-linux-amd64 /usr/bin/forgejo-runner
```

**Alternative: Pin to Specific Version (Recommended for Production)**

If you prefer to pin to a specific version for stability, replace the dynamic download with:

```bash
cd ~
VERSION="v6.3.1"  # Pin to specific version
wget https://code.forgejo.org/forgejo/runner/releases/download/${VERSION}/forgejo-runner-${VERSION#v}-linux-amd64
chmod +x forgejo-runner-${VERSION#v}-linux-amd64
sudo mv forgejo-runner-${VERSION#v}-linux-amd64 /usr/bin/forgejo-runner
```

**What this does**:
- **Dynamic approach**: Downloads the latest stable Forgejo Actions runner
- **Version pinning**: Allows you to specify a known-good version for production
- **System installation**: Installs the binary system-wide in `/usr/bin/` for proper Linux structure
- **Makes the binary executable** and available system-wide

**Production Recommendation**: Use version pinning in production environments to ensure consistency and avoid unexpected breaking changes.
#### 7.2 Register Runner

**Important**: The runner must be registered with your Forgejo instance before it can start. This creates the required `.runner` configuration file.

**Step 1: Get Permissions to Create Repository-level Runners**

To create a repository-level runner, you need **Repository Admin** or **Owner** permissions. Here's how to check and manage permissions:

**Check Your Current Permissions:**
1. Go to your repository: `https://your-forgejo-instance/your-username/your-repo`
2. Look for the **Settings** tab in the repository navigation
3. If you see **Actions** in the left sidebar under Settings, you have the right permissions
4. If you don't see Settings or Actions, you don't have admin access

**Add Repository Admin (Repository Owner Only):**

If you're the repository owner and need to give someone else admin access:

1. **Go to Repository Settings:**
   - Navigate to your repository
   - Click **Settings** tab
   - Click **Collaborators** in the left sidebar

2. **Add Collaborator:**
   - Click **Add Collaborator** button
   - Enter the username or email of the person you want to add
   - Select **Admin** from the role dropdown
   - Click **Add Collaborator**

3. **Alternative: Manage Team Access (for Organizations):**
   - Go to **Settings → Collaborators**
   - Click **Manage Team Access**
   - Add the team with **Admin** permissions

**Repository Roles and Permissions:**

| Role | Can Create Runners | Can Manage Repository | Can Push Code |
|------|--------------------|-----------------------|---------------|
| **Owner** | ✅ Yes | ✅ Yes | ✅ Yes |
| **Admin** | ✅ Yes | ✅ Yes | ✅ Yes |
| **Write** | ❌ No | ❌ No | ✅ Yes |
| **Read** | ❌ No | ❌ No | ❌ No |

**If You Don't Have Permissions:**

**Option 1: Ask Repository Owner**
- Contact the person who owns the repository
- Ask them to create the runner and share the registration token with you

**Option 2: Use Organization/User Runner**
- If you have access to organization settings, create an org-level runner
- Or create a user-level runner if you own other repositories

**Option 3: Site Admin Help**
- Contact your Forgejo instance administrator to create a site-level runner

**Site Administrator: Setting Repository Admin (Forgejo Instance Admin)**

To add an existing user as an Administrator of an existing repository in Forgejo, follow these steps:

1. **Go to the repository**: Navigate to the main page of the repository you want to manage.
2. **Access repository settings**: Click on the "Settings" tab under your repository name.
3. **Go to Collaborators & teams**: In the sidebar, under the "Access" section, click on "Collaborators & teams".
4. **Manage access**: Under "Manage access", locate the existing user you want to make an administrator.
5. **Change their role**: Next to the user's name, select the "Role" dropdown menu and click on "Administrator".

**Important Note**: If the user is already the Owner of the repository, they do not need to add themselves as an Administrator of the repository and indeed cannot. Repository owners automatically have all administrative permissions.

**Important Notes for Site Administrators:**
- **Repository Admin** can manage the repository but cannot modify site-wide settings
- **Site Admin** retains full control over the Forgejo instance
- Changes take effect immediately for the user
- Consider the security implications of granting admin access

**Step 2: Get Registration Token**
1. Go to your Forgejo repository
2. Navigate to **Settings → Actions → Runners**
3. Click **"New runner"**
4. Copy the registration token
**Step 3: Register the Runner**

```bash
# Switch to DEPLOY_USER to register the runner
sudo su - DEPLOY_USER

cd ~

# Register the runner with your Forgejo instance
forgejo-runner register \
  --instance https://your-forgejo-instance \
  --token YOUR_REGISTRATION_TOKEN \
  --name "ci-cd-dind-runner" \
  --labels "ubuntu-latest,docker,dind" \
  --no-interactive
```

**Important**: Replace `your-forgejo-instance` with your actual Forgejo instance URL and `YOUR_REGISTRATION_TOKEN` with the token you copied from Step 2. Make sure the instance URL ends with a `/`.

**Note**: The `your-forgejo-instance` should be the **base URL** of your Forgejo instance (e.g., `https://git.<your-domain>/`), not the full path to the repository. The runner registration process will handle connecting to the specific repository based on the token you provide.

**What this does**:
- Creates the required `.runner` configuration file in the DEPLOY_USER's home directory
- Registers the runner with your Forgejo instance
- Sets up the runner with appropriate labels for Ubuntu and Docker environments

**Step 4: Set Up System Configuration**

```bash
# Create the system config directory for the Forgejo runner
sudo mkdir -p /etc/forgejo-runner

# Copy the runner configuration to the system location
sudo cp /home/DEPLOY_USER/.runner /etc/forgejo-runner/.runner

# Set proper ownership and permissions
sudo chown SERVICE_USER:SERVICE_USER /etc/forgejo-runner/.runner
sudo chmod 600 /etc/forgejo-runner/.runner
```

**What this does**:
- Copies the configuration to the system location (`/etc/forgejo-runner/.runner`)
- Sets proper ownership and permissions so SERVICE_USER can access the config
**Step 5: Create and Enable Systemd Service**

```bash
# Create system config directory for Forgejo runner
sudo mkdir -p /etc/forgejo-runner

sudo tee /etc/systemd/system/forgejo-runner.service > /dev/null << 'EOF'
[Unit]
Description=Forgejo Actions Runner
After=network.target

[Service]
Type=simple
User=SERVICE_USER
WorkingDirectory=/etc/forgejo-runner
ExecStart=/usr/bin/forgejo-runner daemon
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# Enable the service
sudo systemctl daemon-reload
sudo systemctl enable forgejo-runner.service
```

**What this does**:
- Creates the systemd service configuration for the Forgejo runner
- Sets the working directory to `/etc/forgejo-runner` where the `.runner` file is located
- Enables the service to start automatically on boot
- Sets up proper restart behavior for reliability

#### 7.3 Start Service

```bash
# Start the Forgejo runner service
sudo systemctl start forgejo-runner.service

# Verify the service is running
sudo systemctl status forgejo-runner.service
```

**Expected Output**: The service should show "active (running)" status.

**What this does**:
- Starts the Forgejo runner daemon as a system service
- The runner will now be available to accept and execute workflows from your Forgejo instance
- The service will automatically restart if it crashes or the system reboots

#### 7.4 Test Runner Configuration

```bash
# Check if the runner is running
sudo systemctl status forgejo-runner.service

# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager

# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "ci-cd-dind-runner" with status "Online"
```

**Expected Output**:
- `systemctl status` should show "active (running)"
- Forgejo web interface should show the runner as online

**If something goes wrong**:
- Check logs: `sudo journalctl -u forgejo-runner.service -f`
- Verify token: Make sure the registration token is correct
- Check network: Ensure the runner can reach your Forgejo instance
- Restart service: `sudo systemctl restart forgejo-runner.service`
### Step 8: Set Up Docker-in-Docker (DinD) for CI Operations

**Important**: This step sets up a Docker-in-Docker container that provides an isolated environment for CI/CD operations, eliminating resource contention with Harbor and simplifying cleanup.

#### 8.1 Create Containerized CI/CD Environment

```bash
# Start DinD container using the repository's CI/CD Docker Compose file
cd /opt/APP_NAME
docker compose -f ci-cd-compose.yml up -d

# Wait for DinD to be ready
echo "Waiting for DinD container to be ready..."
timeout 60 bash -c 'until docker compose -f ci-cd-compose.yml ps | grep -q "healthy"; do sleep 2; done'

# Test DinD connectivity
docker exec ci-cd-dind docker version
```

**What this does**:
- **Creates isolated DinD environment**: Provides isolated Docker environment for all CI/CD operations
- **Persistent storage**: `ci-cd-data` volume preserves data between restarts
- **Health checks**: Ensures DinD is fully ready before proceeding
- **Configuration management**: Uses the repository's `ci-cd-compose.yml` file

#### 8.2 Configure DinD for Harbor Registry

```bash
# Configure Docker daemon in DinD for Harbor registry
docker exec ci-cd-dind sh -c 'echo "{\"insecure-registries\": [\"localhost:5000\"]}" > /etc/docker/daemon.json'

# Reload Docker daemon in DinD
docker exec ci-cd-dind sh -c 'kill -HUP 1'

# Wait for Docker daemon to reload
sleep 5

# Test Harbor connectivity from DinD
docker exec ci-cd-dind docker pull alpine:latest
docker exec ci-cd-dind docker tag alpine:latest localhost:5000/test/alpine:latest
docker exec ci-cd-dind docker push localhost:5000/test/alpine:latest

# Clean up test image
docker exec ci-cd-dind docker rmi localhost:5000/test/alpine:latest
```

**What this does**:
- **Configures insecure registry**: Allows DinD to push to Harbor without SSL verification
- **Tests connectivity**: Verifies DinD can pull, tag, and push images to Harbor
- **Validates setup**: Ensures the complete CI/CD pipeline will work
#### 8.3 DinD Environment Management

The DinD container is managed as an isolated environment where all CI/CD operations run inside the DinD container, providing complete isolation from the host system.

**Key Benefits:**
- **🧹 Complete Isolation**: All testing and building runs inside DinD
- **🚫 No Host Contamination**: No containers run directly on host Docker
- **⚡ Consistent Environment**: Same isolation level for all operations
- **🎯 Resource Isolation**: CI/CD operations can't interfere with host services
- **🔄 Parallel Safety**: Multiple operations can run safely

**How it works:**
- **Job 1 (Testing)**: Creates PostgreSQL, Rust, and Node.js containers inside DinD
- **Job 2 (Building)**: Uses DinD directly for building and pushing Docker images to Harbor
- **Job 3 (Deployment)**: Production runner pulls images from Harbor and deploys using `docker-compose.prod.yml`

**Testing DinD Setup:**

```bash
# Test DinD functionality
docker exec ci-cd-dind docker run --rm alpine:latest echo "DinD is working!"

# Test Harbor integration
docker exec ci-cd-dind docker pull alpine:latest
docker exec ci-cd-dind docker tag alpine:latest localhost:5000/test/dind-test:latest
docker exec ci-cd-dind docker push localhost:5000/test/dind-test:latest

# Clean up test
docker exec ci-cd-dind docker rmi localhost:5000/test/dind-test:latest
```

**Expected Output**:
- DinD container should be running and accessible
- Docker commands should work inside DinD
- Harbor push/pull should work from DinD

#### 8.4 Production Deployment Architecture

The production deployment uses a separate Docker Compose file (`docker-compose.prod.yml`) that pulls built images from the Harbor registry and deploys the complete application stack (a sketch follows after the lists below).

**Production Stack Components:**
- **PostgreSQL**: Production database with persistent storage
- **Backend**: Rust application built and pushed from CI/CD
- **Frontend**: Next.js application built and pushed from CI/CD
- **Nginx**: Reverse proxy with SSL termination

**Deployment Flow:**
1. **Production Runner**: Runs on Production Linode with `production` label
2. **Image Pull**: Pulls latest images from Harbor registry on CI Linode
3. **Stack Deployment**: Uses `docker-compose.prod.yml` to deploy complete stack
4. **Health Verification**: Ensures all services are healthy before completion

**Key Benefits:**
- **🔄 Image Registry**: Centralized image storage in Harbor
- **📦 Consistent Deployment**: Same images tested in CI are deployed to production
- **⚡ Fast Deployment**: Only pulls changed images
- **🛡️ Rollback Capability**: Can easily rollback to previous image versions
- **📊 Health Monitoring**: Built-in health checks for all services
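The actual `docker-compose.prod.yml` lives in the repository; the sketch below only illustrates the shape of such a file using the variables created in the production `.env` (Step 17.3). Service names, image paths, and ports here are assumptions, not the repository's real configuration.

```yaml
# Illustrative sketch of a docker-compose.prod.yml - not the repository's actual file
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data

  backend:
    image: ${REGISTRY}/${IMAGE_NAME}/backend:${IMAGE_TAG}
    environment:
      DATABASE_URL: ${DATABASE_URL}
      RUST_LOG: ${RUST_LOG}
    depends_on:
      - postgres

  frontend:
    image: ${REGISTRY}/${IMAGE_NAME}/frontend:${IMAGE_TAG}
    environment:
      NODE_ENV: ${NODE_ENV}
    depends_on:
      - backend

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
    depends_on:
      - frontend
      - backend

volumes:
  postgres_data:
```

Because every image reference is parameterised by `REGISTRY`, `IMAGE_NAME`, and `IMAGE_TAG`, rolling back amounts to setting `IMAGE_TAG` to a previous tag and re-running `docker compose -f docker-compose.prod.yml up -d`.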
#### 8.5 Monitoring Script

**Important**: The repository includes a pre-configured monitoring script in the `scripts/` directory that can be used for both CI/CD and production monitoring.

**Repository Script**:
- `scripts/monitor.sh` - Comprehensive monitoring script with support for both CI/CD and production environments

**To use the repository monitoring script**:
```bash
# The repository is already cloned at /opt/APP_NAME/
cd /opt/APP_NAME

# Make the script executable
chmod +x scripts/monitor.sh

# Test CI/CD monitoring
./scripts/monitor.sh --type ci-cd

# Test production monitoring (if you have a production setup)
./scripts/monitor.sh --type production
```

**Note**: The repository script is more comprehensive and includes proper error handling, colored output, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate monitoring information.
### Step 9: Configure Firewall

```bash
sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 443/tcp  # Harbor registry (public read access)
```

**Security Model**:
- **Port 443 (Harbor)**: Public read access for public projects, authenticated write access
- **SSH**: Allowed; optionally restrict it to your own IP with `sudo ufw allow from YOUR_IP to any port 22 proto tcp` instead of `sudo ufw allow ssh`
- **All other ports**: Blocked

### Step 10: Test CI/CD Setup

#### 10.1 Test Docker Installation

```bash
docker --version
docker compose --version
```

#### 10.2 Check Harbor Status

```bash
cd /opt/APP_NAME/harbor
docker compose ps
```

#### 10.3 Test Harbor Access

```bash
# Test Harbor API
curl -k https://localhost/api/v2.0/health

# Test Harbor UI
curl -k -I https://localhost
```

#### 10.4 Get Public Key for Production Server

```bash
cat ~/.ssh/id_ed25519.pub
```

**Important**: Copy this public key - you'll need it for the production server setup.

---
## Part 2: Production Linode Setup

### Step 11: Initial System Setup

#### 11.1 Update the System

```bash
sudo apt update && sudo apt upgrade -y
```

#### 11.2 Configure Timezone

```bash
# Configure timezone interactively
sudo dpkg-reconfigure tzdata

# Verify timezone setting
date
```

**What this does**: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).

**Expected output**: After selecting your timezone, the `date` command should show the current date and time in your selected timezone.

#### 11.3 Configure /etc/hosts

```bash
# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts

# Verify the configuration
cat /etc/hosts
```

**What this does**:
- Adds localhost entries for both IPv4 and IPv6 addresses to `/etc/hosts`
- Ensures proper localhost resolution for both IPv4 and IPv6

**Important**: Replace `YOUR_PRODUCTION_IPV4_ADDRESS` and `YOUR_PRODUCTION_IPV6_ADDRESS` with the actual IPv4 and IPv6 addresses of your Production Linode obtained from your Linode dashboard.

**Expected output**: The `/etc/hosts` file should show entries for `127.0.0.1`, `::1`, and your Linode's actual IP addresses all mapping to `localhost`.

#### 11.4 Install Essential Packages

```bash
sudo apt install -y \
    curl \
    wget \
    git \
    ca-certificates \
    apt-transport-https \
    software-properties-common \
    ufw \
    fail2ban \
    htop \
    nginx \
    certbot \
    python3-certbot-nginx
```
### Step 12: Create Users

#### 12.1 Create the SERVICE_USER User

```bash
# Create dedicated group for the service account
sudo groupadd -r SERVICE_USER

# Create service account user with dedicated group
sudo useradd -r -g SERVICE_USER -s /bin/bash -m -d /home/SERVICE_USER SERVICE_USER
echo "SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd
```

#### 12.2 Create the DEPLOY_USER User

**Note**: If you already created `DEPLOY_USER` on this Linode in Step 0.4.4, skip the `useradd`/`usermod` commands below (they will report that the user already exists).

```bash
# Create deployment user
sudo useradd -m -s /bin/bash DEPLOY_USER
sudo usermod -aG sudo DEPLOY_USER
echo "DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd
```

#### 12.3 Verify Users

```bash
sudo su - SERVICE_USER
whoami
pwd
exit

sudo su - DEPLOY_USER
whoami
pwd
exit
```
### Step 13: Install Docker

#### 13.1 Add Docker Repository

```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
```

#### 13.2 Install Docker Packages

```bash
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
```

#### 13.3 Configure Docker for Service Account

```bash
sudo usermod -aG docker SERVICE_USER
```

### Step 14: Install Docker Compose

**Note**: The `docker-compose-plugin` installed in Step 13.2 already provides the `docker compose` command; the standalone `docker-compose` binary below is only needed if your tooling expects the legacy `docker-compose` command.

```bash
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```
### Step 15: Configure Security

#### 15.1 Configure Firewall

```bash
sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 3000/tcp
sudo ufw allow 3001/tcp
```

#### 15.2 Configure Fail2ban

```bash
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
```
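On Ubuntu, fail2ban ships with an `sshd` jail that protects SSH out of the box once the service is running. If you want to tighten the defaults, a minimal override like the sketch below can be dropped into `/etc/fail2ban/jail.local`; the retry and ban values are example numbers, not recommendations from the repository.

```bash
# Optional: tighten the default sshd jail (example values only)
sudo tee /etc/fail2ban/jail.local > /dev/null << 'EOF'
[sshd]
enabled = true
maxretry = 5
findtime = 10m
bantime = 1h
EOF

sudo systemctl restart fail2ban
sudo fail2ban-client status sshd
```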
### Step 16: Create Application Directory

#### 16.1 Create Directory Structure

```bash
sudo mkdir -p /opt/APP_NAME
sudo chown SERVICE_USER:SERVICE_USER /opt/APP_NAME
```

**Note**: Replace `APP_NAME` with your actual application name. This directory name can be controlled via the `APP_NAME` secret in your Forgejo repository settings. If you set the `APP_NAME` secret to `myapp`, the deployment directory will be `/opt/myapp`.

#### 16.2 Create SSL Directory (Optional - for domain users)

```bash
sudo mkdir -p /opt/APP_NAME/nginx/ssl
sudo chown SERVICE_USER:SERVICE_USER /opt/APP_NAME/nginx/ssl
```

### Step 17: Clone Repository and Set Up Application Files

#### 17.1 Switch to SERVICE_USER User

```bash
sudo su - SERVICE_USER
```

#### 17.2 Clone Repository

```bash
cd /opt/APP_NAME
git clone https://your-forgejo-instance/your-username/APP_NAME.git .
```

**Important**: The repository includes a pre-configured `nginx/nginx.conf` file that handles both SSL and non-SSL scenarios, with proper security headers, rate limiting, and CORS configuration. This file will be automatically used by the Docker Compose setup.

**Important**: The repository also includes a pre-configured `.forgejo/workflows/ci.yml` file that handles the complete CI/CD pipeline including testing, building, and deployment. This workflow is already set up to work with the private registry and production deployment.

**Note**: Replace `your-forgejo-instance` and `your-username/APP_NAME` with your actual Forgejo instance URL and repository path.

#### 17.3 Create Environment File

The repository doesn't include a `.env.example` file for security reasons. The CI/CD pipeline will create the `.env` file dynamically during deployment. However, for manual testing or initial setup, you can create a basic `.env` file:

```bash
cat > /opt/APP_NAME/.env << 'EOF'
# Production Environment Variables
POSTGRES_PASSWORD=your_secure_password_here
REGISTRY=YOUR_CI_CD_IP
IMAGE_NAME=APP_NAME
IMAGE_TAG=latest

# Database Configuration
POSTGRES_DB=sharenet
POSTGRES_USER=sharenet
DATABASE_URL=postgresql://sharenet:your_secure_password_here@postgres:5432/sharenet

# Application Configuration
NODE_ENV=production
RUST_LOG=info
RUST_BACKTRACE=1
EOF
```

**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address and `your_secure_password_here` with a strong password. Harbor is served over HTTPS on the standard port 443 (Step 5), so no port suffix is needed in `REGISTRY`.

#### 17.4 Configure Docker for Harbor Access

```bash
# Add the CI/CD Harbor registry to Docker's insecure registries
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json << EOF
{
  "insecure-registries": ["YOUR_CI_CD_IP:8080"]
}
EOF

# Restart Docker to apply changes
sudo systemctl restart docker
```

**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address.
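
To confirm the daemon picked up the change, check the registry list Docker reports:

```bash
# The Harbor address should appear under "Insecure Registries"
docker info | grep -A 3 "Insecure Registries"
```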

### Step 18: Set Up SSH Key Authentication

#### 18.1 Add CI/CD Public Key

```bash
# Create .ssh directory for SERVICE_USER
mkdir -p ~/.ssh
chmod 700 ~/.ssh

# Add the CI/CD public key (copy from CI/CD Linode)
echo "YOUR_CI_CD_PUBLIC_KEY" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

**Important**: Replace `YOUR_CI_CD_PUBLIC_KEY` with the public key from the CI/CD Linode (the output from `cat ~/.ssh/id_ed25519.pub` on the CI/CD Linode).

#### 18.2 Test SSH Connection

From the CI/CD Linode, test the SSH connection:

```bash
ssh production
```

**Expected output**: You should be able to SSH to the production server without a password prompt.
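
The `production` name works because it is an SSH alias on the CI/CD Linode. If the alias is not already defined there, a minimal `~/.ssh/config` entry along these lines makes the command resolve; the exact user and key path depend on your earlier setup:

```bash
# On the CI/CD Linode, as the user that runs deployments (values are examples)
cat >> ~/.ssh/config << 'EOF'
Host production
    HostName YOUR_PRODUCTION_IP
    User SERVICE_USER
    IdentityFile ~/.ssh/id_ed25519
EOF
chmod 600 ~/.ssh/config
```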

### Step 19: Set Up Forgejo Runner for Production Deployment

**Important**: The Production Linode needs a Forgejo runner to execute the deployment job from the CI/CD workflow. This runner will pull images from Harbor and deploy using `docker-compose.prod.yml`.

#### 19.1 Install Forgejo Runner

```bash
# Download the Forgejo runner binary (v4.0.0 shown; check Codeberg for the latest release)
wget -O forgejo-runner https://codeberg.org/forgejo/runner/releases/download/v4.0.0/forgejo-runner-linux-amd64

# Make it executable
chmod +x forgejo-runner

# Move to system location
sudo mv forgejo-runner /usr/bin/forgejo-runner

# Verify installation
forgejo-runner --version
```

#### 19.2 Create Runner User and Directory

```bash
# Create dedicated user for the runner
sudo useradd -r -s /bin/bash -m -d /home/forgejo-runner forgejo-runner

# Create runner directory
sudo mkdir -p /opt/forgejo-runner
sudo chown forgejo-runner:forgejo-runner /opt/forgejo-runner

# Add runner user to docker group
sudo usermod -aG docker forgejo-runner
```

#### 19.3 Get Registration Token

1. Go to your Forgejo repository
2. Navigate to **Settings → Actions → Runners**
3. Click **"New runner"**
4. Copy the registration token

#### 19.4 Register the Production Runner

```bash
# Switch to runner user
sudo su - forgejo-runner

# Register the runner with production label
forgejo-runner register \
  --instance https://your-forgejo-instance \
  --token YOUR_REGISTRATION_TOKEN \
  --name "production-runner" \
  --labels "production,ubuntu-latest,docker" \
  --no-interactive

# Copy configuration to system location
sudo cp /home/forgejo-runner/.runner /opt/forgejo-runner/.runner
sudo chown forgejo-runner:forgejo-runner /opt/forgejo-runner/.runner
sudo chmod 600 /opt/forgejo-runner/.runner
```

**Important**: Replace `your-forgejo-instance` with your actual Forgejo instance URL and `YOUR_REGISTRATION_TOKEN` with the token you copied from Step 19.3.

#### 19.5 Create Systemd Service

```bash
# Create systemd service file
sudo tee /etc/systemd/system/forgejo-runner.service > /dev/null << 'EOF'
[Unit]
Description=Forgejo Actions Runner (Production)
After=network.target docker.service

[Service]
Type=simple
User=forgejo-runner
WorkingDirectory=/opt/forgejo-runner
ExecStart=/usr/bin/forgejo-runner daemon
Restart=always
RestartSec=10
Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

[Install]
WantedBy=multi-user.target
EOF

# Enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable forgejo-runner.service
sudo systemctl start forgejo-runner.service

# Verify the service is running
sudo systemctl status forgejo-runner.service
```

#### 19.6 Test Runner Configuration

```bash
# Check if the runner is running
sudo systemctl status forgejo-runner.service

# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager

# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "production-runner" with status "Online"
```

**Expected Output**:
- `systemctl status` should show "active (running)"
- Forgejo web interface should show the runner as online with "production" label

**Important**: The CI/CD workflow (`.forgejo/workflows/ci.yml`) is already configured to use this production runner. The deploy job runs on `runs-on: [self-hosted, production]`, which means it will execute on any runner with the "production" label. When the workflow runs, it will:

1. Pull the latest Docker images from Harbor registry
2. Use the `docker-compose.prod.yml` file to deploy the application stack
3. Create the necessary environment variables for production deployment
4. Verify that all services are healthy after deployment

The production runner will automatically handle the deployment process when you push to the main branch.
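
For orientation, a deploy job that targets this runner generally looks like the fragment below. This is an illustrative sketch of a `jobs:` entry, not the repository's actual `ci.yml`; the step names, the use of the `APP_NAME` secret, and the exact compose commands are assumptions:

```yaml
# Hypothetical deploy job - the real one ships in .forgejo/workflows/ci.yml
deploy:
  runs-on: [self-hosted, production]
  steps:
    - name: Pull latest images
      run: |
        cd /opt/${{ secrets.APP_NAME }}
        docker compose -f docker-compose.prod.yml pull
    - name: Deploy the stack
      run: |
        cd /opt/${{ secrets.APP_NAME }}
        docker compose -f docker-compose.prod.yml up -d
    - name: Verify services are healthy
      run: |
        cd /opt/${{ secrets.APP_NAME }}
        docker compose -f docker-compose.prod.yml ps
```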

#### 19.7 Understanding the Production Docker Compose Setup

The `docker-compose.prod.yml` file is specifically designed for production deployment and differs from development setups (an illustrative sketch follows the lists below):

**Key Features**:
- **Image-based deployment**: Uses pre-built images from Harbor registry instead of building from source
- **Production networking**: All services communicate through a dedicated `sharenet-network`
- **Health checks**: Each service includes health checks to ensure proper startup order
- **Nginx reverse proxy**: Includes Nginx for SSL termination, load balancing, and security headers
- **Persistent storage**: PostgreSQL data is stored in a named volume for persistence
- **Environment variables**: Uses environment variables for configuration (set by the CI/CD workflow)

**Service Architecture**:
1. **PostgreSQL**: Database with health checks and persistent storage
2. **Backend**: Rust API service that waits for PostgreSQL to be healthy
3. **Frontend**: Next.js application that waits for backend to be healthy
4. **Nginx**: Reverse proxy that serves the frontend and proxies API requests to backend

**Deployment Process**:
1. The production runner pulls the latest images from Harbor registry
2. Creates environment variables for the deployment
3. Runs `docker compose -f docker-compose.prod.yml up -d`
4. Waits for all services to be healthy
5. Verifies the deployment was successful
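
To make that structure concrete, here is a heavily trimmed sketch of what a compose file with these properties can look like. It is illustrative only; the image paths, ports, and health check command are assumptions, and the repository's actual `docker-compose.prod.yml` is the source of truth:

```yaml
# Illustrative fragment only - not the repository's actual docker-compose.prod.yml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 10s
      retries: 5
    networks: [sharenet-network]

  backend:
    image: ${REGISTRY}/public/backend:${IMAGE_TAG}
    environment:
      DATABASE_URL: ${DATABASE_URL}
    depends_on:
      postgres:
        condition: service_healthy
    networks: [sharenet-network]

  frontend:
    image: ${REGISTRY}/public/frontend:${IMAGE_TAG}
    depends_on:
      - backend
    networks: [sharenet-network]

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - frontend
    networks: [sharenet-network]

volumes:
  postgres_data:

networks:
  sharenet-network:
```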

### Step 20: Test Production Setup

#### 20.1 Test Docker Installation

```bash
docker --version
docker compose --version
```

#### 20.2 Test Harbor Access

```bash
# Test pulling an image from the CI/CD Harbor registry
docker pull YOUR_CI_CD_IP:8080/public/backend:latest
```

**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address.
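
If the pull is rejected with an authentication error (whether anonymous pulls are allowed depends on how the Harbor project is configured), log in first with a Harbor account that can read the `public` project:

```bash
# Credentials are whatever you configured in Harbor on the CI/CD Linode
docker login YOUR_CI_CD_IP:8080
```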

#### 20.3 Test Application Deployment

```bash
cd /opt/APP_NAME
docker compose up -d
```

#### 20.4 Verify Application Status

```bash
docker compose ps
curl http://localhost:3000
curl http://localhost:3001/health
```

**Expected Output**:
- All containers should be running
- Frontend should be accessible on port 3000
- Backend health check should return 200 OK

---

## Part 3: Final Configuration and Testing

### Step 21: Configure Forgejo Repository Secrets

#### 21.1 Required Repository Secrets

Go to your Forgejo repository and add these secrets in **Settings → Secrets and Variables → Actions** (the workflow consumes them as sketched after the lists below):

**Required Secrets:**
- `CI_CD_IP`: Your CI/CD Linode IP address
- `PRODUCTION_IP`: Your Production Linode IP address
- `DEPLOY_USER`: The deployment user name (e.g., `deploy`, `ci`, `admin`)
- `SERVICE_USER`: The service user name (e.g., `appuser`, `service`, `app`)
- `APP_NAME`: Your application name (e.g., `sharenet`, `myapp`)
- `POSTGRES_PASSWORD`: A strong password for the PostgreSQL database

**Optional Secrets (for domain users):**
- `DOMAIN`: Your domain name (e.g., `example.com`)
- `EMAIL`: Your email for SSL certificate notifications
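
The secret names must match what the workflow references. As a rough illustration (the real references live in the repository's `ci.yml`), a workflow step reads them through the `secrets` context:

```yaml
# Hypothetical step - secret names must match the repository settings listed above
- name: Create production .env
  run: |
    echo "POSTGRES_PASSWORD=${{ secrets.POSTGRES_PASSWORD }}" >> .env
    echo "REGISTRY=${{ secrets.CI_CD_IP }}:8080" >> .env
    echo "IMAGE_NAME=${{ secrets.APP_NAME }}" >> .env
```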

#### 21.2 Configure Forgejo Actions Runner

##### 21.2.1 Get Runner Token

1. Go to your Forgejo repository
2. Navigate to **Settings → Actions → Runners**
3. Click **"New runner"**
4. Copy the registration token

##### 21.2.2 Configure Runner

```bash
# Switch to DEPLOY_USER on CI/CD Linode
sudo su - DEPLOY_USER

# Get the registration token from your Forgejo repository
# Go to Settings → Actions → Runners → New runner
# Copy the registration token

# Configure the runner
forgejo-runner register \
  --instance https://your-forgejo-instance \
  --token YOUR_TOKEN \
  --name "ci-cd-dind-runner" \
  --labels "ubuntu-latest,docker,dind" \
  --no-interactive
```

##### 21.2.3 Start Runner

```bash
sudo systemctl start forgejo-runner.service
sudo systemctl status forgejo-runner.service
```

##### 21.2.4 Test Runner Configuration

```bash
# Check if the runner is running
sudo systemctl status forgejo-runner.service

# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager

# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "ci-cd-dind-runner" with status "Online"
```

**Expected Output**:
- `systemctl status` should show "active (running)"
- Forgejo web interface should show the runner as online

### Step 22: Set Up Monitoring and Cleanup

#### 22.1 Monitoring Script

**Important**: The repository includes a pre-configured monitoring script in the `scripts/` directory that can be used for both CI/CD and production monitoring.

**Repository Script**:
- `scripts/monitor.sh` - Comprehensive monitoring script with support for both CI/CD and production environments

**To use the repository monitoring script**:

```bash
# The repository is already cloned at /opt/APP_NAME/
cd /opt/APP_NAME

# Make the script executable
chmod +x scripts/monitor.sh

# Test CI/CD monitoring
./scripts/monitor.sh --type ci-cd

# Test production monitoring (if you have a production setup)
./scripts/monitor.sh --type production
```

**Note**: The repository script includes proper error handling, colored output, and support for both CI/CD and production environments. It automatically detects the environment and provides the appropriate monitoring information.

#### 22.2 DinD Cleanup Script

**Important**: With the DinD setup, CI/CD operations are isolated in the DinD container. This means we can use a much simpler cleanup approach - just restart the DinD container for a fresh environment (a minimal sketch of this idea follows the list below).

**DinD Cleanup Script**:
- `scripts/dind-cleanup.sh` - Simple script to restart DinD container for fresh CI environment

**To use the DinD cleanup script**:

```bash
# The repository is already cloned at /opt/APP_NAME/
cd /opt/APP_NAME

# Make the script executable
chmod +x scripts/dind-cleanup.sh

# Test DinD cleanup (dry run first)
./scripts/dind-cleanup.sh --dry-run

# Run DinD cleanup
./scripts/dind-cleanup.sh
```

**Benefits of DinD cleanup**:
- ✅ **Simple operation**: Just restart the DinD container
- ✅ **Zero Harbor impact**: Harbor registry is completely unaffected
- ✅ **Fresh environment**: Every cleanup gives a completely clean state
- ✅ **Fast execution**: No complex resource scanning needed
- ✅ **Reliable**: No risk of accidentally removing Harbor resources
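
The repository's script is the one to use, but the core idea can be expressed in a few lines. The sketch below assumes the DinD container is named `ci-cd-dind`, as in the verification commands that follow:

```bash
# Minimal illustration of the cleanup idea: restart DinD and wait for its daemon
docker restart ci-cd-dind

# Wait until the inner Docker daemon answers again (up to ~60 seconds)
for i in $(seq 1 30); do
  if docker exec ci-cd-dind docker info > /dev/null 2>&1; then
    echo "DinD is ready"
    break
  fi
  sleep 2
done
```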

#### 22.3 Test DinD Cleanup Script

```bash
# Test DinD cleanup with dry run first
./scripts/dind-cleanup.sh --dry-run

# Run the DinD cleanup script
./scripts/dind-cleanup.sh

# Verify DinD is working after cleanup
docker exec ci-cd-dind docker version
docker exec ci-cd-dind docker run --rm alpine:latest echo "DinD cleanup successful!"
```

**Expected Output**:
- DinD cleanup script should run without errors
- DinD container should be restarted with fresh environment
- Docker commands should work inside DinD after cleanup
- Harbor registry should remain completely unaffected

**If something goes wrong**:
- Check script permissions: `ls -la scripts/dind-cleanup.sh`
- Verify DinD container: `docker ps | grep ci-cd-dind`
- Check DinD logs: `docker logs ci-cd-dind`
- Run manually: `bash -x scripts/dind-cleanup.sh`

#### 22.4 Set Up Automated DinD Cleanup

```bash
# Create a cron job to run DinD cleanup daily at 2 AM
(crontab -l 2>/dev/null; echo "0 2 * * * cd /opt/APP_NAME && ./scripts/dind-cleanup.sh >> /tmp/dind-cleanup.log 2>&1") | crontab -

# Verify the cron job was added
crontab -l
```

**What this does:**
- **Runs automatically**: The DinD cleanup script runs every day at 2:00 AM
- **Frequency**: Daily cleanup to prevent CI/CD resource buildup
- **Logging**: All cleanup output is logged to `/tmp/dind-cleanup.log`
- **What it cleans**: Restarts DinD container for fresh CI environment
- **Zero Harbor impact**: Harbor registry operations are completely unaffected

### Step 23: Test Complete Pipeline

#### 23.1 Trigger a Test Build

1. **Make a small change** to your repository (e.g., update a comment or add a test file)
2. **Commit and push** the changes to trigger the CI/CD pipeline
3. **Monitor the build** in your Forgejo repository → Actions tab

#### 23.2 Verify Pipeline Steps

The pipeline should execute these steps in order:

1. **Checkout**: Clone the repository
2. **Setup DinD**: Configure Docker-in-Docker environment
3. **Test Backend**: Run backend tests in isolated environment
4. **Test Frontend**: Run frontend tests in isolated environment
5. **Build Backend**: Build backend Docker image in DinD
6. **Build Frontend**: Build frontend Docker image in DinD
7. **Push to Registry**: Push images to Harbor registry from DinD
8. **Deploy to Production**: Deploy to production server

#### 23.3 Check Harbor

```bash
# On CI/CD Linode
cd /opt/APP_NAME/registry

# Check if new images were pushed
curl -k https://localhost:8080/v2/_catalog

# Check specific repository tags
curl -k https://localhost:8080/v2/public/backend/tags/list
curl -k https://localhost:8080/v2/public/frontend/tags/list
```

#### 23.4 Verify Production Deployment

```bash
# On Production Linode
cd /opt/APP_NAME

# Check if containers are running with new images
docker compose ps

# Check application health
curl http://localhost:3000
curl http://localhost:3001/health

# Check container logs for any errors
docker compose logs backend
docker compose logs frontend
```

#### 23.5 Test Application Functionality

1. **Frontend**: Visit your production URL (IP or domain)
2. **Backend API**: Test API endpoints
3. **Database**: Verify database connections
4. **Logs**: Check for any errors in application logs

### Step 24: Set Up SSL/TLS (Optional - Domain Users)

#### 24.1 Install SSL Certificate

If you have a domain pointing to your Production Linode:

```bash
# On Production Linode
sudo certbot --nginx -d your-domain.com

# Verify certificate
sudo certbot certificates
```

#### 24.2 Configure Auto-Renewal

```bash
# Test auto-renewal
sudo certbot renew --dry-run

# Add to crontab for automatic renewal
sudo crontab -e
# Add this line:
# 0 12 * * * /usr/bin/certbot renew --quiet
```
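
If you prefer not to edit the root crontab interactively, the same schedule can be dropped into `/etc/cron.d` (cron.d entries include a user field):

```bash
# Install the renewal job system-wide; runs daily at 12:00 as root
echo "0 12 * * * root /usr/bin/certbot renew --quiet" | sudo tee /etc/cron.d/certbot-renew
```

Note that on Ubuntu the certbot package usually ships a systemd timer that already handles renewal; check with `systemctl list-timers | grep certbot` before adding a duplicate job.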

### Step 25: Final Verification

#### 25.1 Security Check

```bash
# Check firewall status
sudo ufw status

# Check fail2ban status
sudo systemctl status fail2ban

# Check SSH access (should be key-based only)
sudo grep "PasswordAuthentication" /etc/ssh/sshd_config
```

#### 25.2 Performance Check

```bash
# Check system resources
htop

# Check disk usage
df -h

# Check Docker disk usage
docker system df
```

#### 25.3 Backup Verification

```bash
# Test backup script
cd /opt/APP_NAME
./scripts/backup.sh --dry-run

# Run actual backup
./scripts/backup.sh
```

### Step 26: Documentation and Maintenance

#### 26.1 Update Documentation

1. **Update README.md** with deployment information
2. **Document environment variables** and their purposes
3. **Create troubleshooting guide** for common issues
4. **Document backup and restore procedures**

#### 26.2 Set Up Monitoring Alerts

```bash
# Set up monitoring cron job
(crontab -l 2>/dev/null; echo "*/5 * * * * cd /opt/APP_NAME && ./scripts/monitor.sh --type production >> /tmp/monitor.log 2>&1") | crontab -

# Check monitoring logs
tail -f /tmp/monitor.log
```

#### 26.3 Regular Maintenance Tasks

**Daily:**
- Check application logs for errors
- Monitor system resources
- Verify backup completion

**Weekly:**
- Review security logs
- Update system packages
- Test backup restoration

**Monthly:**
- Review and rotate logs (a sketch of a logrotate rule for the cron logs follows this list)
- Update SSL certificates
- Review and update documentation
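
Since the cron jobs above append to `/tmp/monitor.log` and `/tmp/dind-cleanup.log` indefinitely, a small logrotate rule keeps them bounded. This is an optional sketch; the paths simply match the cron entries used earlier:

```bash
# Rotate the pipeline's cron logs weekly, keeping four compressed copies
sudo tee /etc/logrotate.d/cicd-pipeline << 'EOF'
/tmp/monitor.log /tmp/dind-cleanup.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
EOF
```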

---

## 🎉 Congratulations!

You have successfully set up a complete CI/CD pipeline with:

- ✅ **Automated testing** on every code push in isolated DinD environment
- ✅ **Docker image building** and Harbor registry storage
- ✅ **Automated deployment** to production
- ✅ **Health monitoring** and logging
- ✅ **Backup and cleanup** automation
- ✅ **Security hardening** with proper user separation
- ✅ **SSL/TLS support** for production (optional)
- ✅ **Zero resource contention** between CI/CD and Harbor

Your application is now ready for continuous deployment with proper security, monitoring, and maintenance procedures in place!