# CI/CD Pipeline Setup Guide

This guide covers setting up a complete Continuous Integration/Continuous Deployment (CI/CD) pipeline with a CI/CD Linode and a Production Linode for automated builds, testing, and deployments, using Docker-in-Docker (DinD) for isolated CI operations.

## Architecture Overview

```
┌─────────────────┐      ┌──────────────────┐      ┌──────────────────┐
│  Forgejo Host   │      │   CI/CD Linode   │      │ Production Linode│
│  (Repository)   │      │ (Actions Runner) │      │ (Podman Deploy)  │
│                 │      │ + Podman Registry│      │                  │
│                 │      │ + PiP Container  │      │                  │
└─────────────────┘      └──────────────────┘      └──────────────────┘
         │                        │                         │
         │                        │                         │
         └─────────── Push ───────┼─────────────────────────┘
                                  │
                                  └─── Deploy ──────────────┘
```

## Pipeline Flow

1. **Code Push**: Developer pushes code to the Forgejo repository
2. **Automated Testing**: CI/CD Linode runs tests in an isolated DinD environment
3. **Image Building**: If tests pass, Docker images are built within DinD
4. **Registry Push**: Images are pushed to the Docker Registry from DinD
5. **Production Deployment**: Production Linode pulls the images and deploys
6. **Health Check**: Application is verified and accessible
## Key Benefits of PiP (Podman-in-Podman) Approach

### **For Rust Testing:**
- ✅ **Fresh environment** every test run
- ✅ **Parallel execution** capability
- ✅ **Isolated dependencies** - no test pollution
- ✅ **Fast cleanup** - just restart the PiP container (see the one-line sketch at the end of this section)

### **For CI/CD Operations:**
- ✅ **Zero resource contention** with the Podman Registry
- ✅ **Simple cleanup** - a one-line container restart
- ✅ **Perfect isolation** - CI/CD can't affect the Podman Registry
- ✅ **Consistent environment** - same setup every time

### **For Maintenance:**
- ✅ **Reduced complexity** - no complex cleanup scripts
- ✅ **Easy debugging** - isolated environment
- ✅ **Reliable operations** - no interference between services
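
The "fast cleanup" above is literally a container restart. A one-line sketch, using the container name introduced in Step 6:

```bash
# Discard the entire CI environment, then recreate it with the `podman run` command from Step 6.1
podman rm -f ci-pip
```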

## Prerequisites

- Two Ubuntu 24.04 LTS Linodes with root access
- Basic familiarity with Linux commands and SSH
- Forgejo repository with Actions enabled

## Quick Start

1. **Set up the CI/CD Linode** (Steps 0-8)
2. **Set up the Production Linode** (Steps 10-16)
3. **Set up Forgejo repository secrets** (Step 17)
4. **Test the complete pipeline** (Step 18)

## What's Included

### CI/CD Linode Features
- Forgejo Actions runner for automated builds
- **Podman-in-Podman (PiP) container** for isolated CI operations
- Docker Registry v2 with an nginx reverse proxy for image storage
- **FHS-compliant directory structure** for data, certificates, and logs
- Unauthenticated pulls, authenticated pushes
- Automatic HTTPS with nginx
- Secure SSH communication with production
- **Simplified cleanup** - just restart the PiP container

### Production Linode Features
- Podman-based application deployment
- Nginx reverse proxy with security headers
- Automated backups and monitoring
- Firewall and fail2ban protection

### Pipeline Features
- **Automated testing** on every code push in an isolated environment
- **Automated image building** and registry push from PiP
- **Automated deployment** to production
- **Rollback capability** with image versioning
- **Health monitoring** and logging
- **Zero resource contention** between CI/CD and Docker Registry v2
## Security Model and User Separation

This setup follows the **principle of least privilege**, using separate users for different purposes:

### User Roles

1. **Root User**
   - **Purpose**: Initial system setup only
   - **SSH Access**: Disabled after setup
   - **Privileges**: Full system access (used only during initial configuration)

2. **Deployment User (`CI_DEPLOY_USER` on the CI Linode, `PROD_DEPLOY_USER` on the Production Linode)**
   - **Purpose**: SSH access, deployment tasks, system administration
   - **SSH Access**: Enabled with key-based authentication
   - **Privileges**: Sudo access for deployment and administrative tasks
   - **Example**: `ci-deploy` / `prod-deploy`

3. **Service Account (`CI_SERVICE_USER` on the CI Linode, `PROD_SERVICE_USER` on the Production Linode)**
   - **Purpose**: Running application services (Docker containers, databases)
   - **SSH Access**: None (no login shell)
   - **Privileges**: No sudo access, minimal system access
   - **Example**: `ci-service` / `prod-service`

### Security Benefits

- **No root SSH access**: Eliminates the most common attack vector
- **Principle of least privilege**: Each user has only the access it needs
- **Separation of concerns**: Deployment tasks and service execution are kept separate
- **Audit trail**: Clear distinction between deployment and service activities
- **Reduced attack surface**: The service account has minimal privileges

### File Permissions

- **Application files**: Owned by the service account (`CI_SERVICE_USER` on the CI Linode, `PROD_SERVICE_USER` on the Production Linode) for security
- **Docker operations**: Run by the service account (`CI_SERVICE_USER` / `PROD_SERVICE_USER`) with Docker group access
- **Service execution**: Run by the service account; no sudo needed (a quick verification sketch follows)
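
Once the users exist (they are created in Step 0.4.4 and Step 2), an optional sanity check of this separation looks like the following sketch (the names are the example names from this guide):

```bash
# Deployment user should be in the sudo group; the service account should not
id ci-deploy      # expect the 'sudo' group in the groups list
id ci-service     # expect no 'sudo' group listed

# The service account should have no sudo rights at all
sudo -l -U ci-service
# expect output ending in: "User ci-service is not allowed to run sudo on <host>."
```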

---

## Prerequisites and Initial Setup

### What's Already Done (Assumptions)

This guide assumes you have already:

1. **Created two Ubuntu 24.04 LTS Linodes** with root access
2. **Set root passwords** for both Linodes
3. **Installed an SSH client** on your local machine
4. **Created a Forgejo repository** with Actions enabled

### Step 0: Initial SSH Access and Verification

Before proceeding with the setup, establish initial SSH access to both Linodes.

#### 0.1 Get Your Linode IP Addresses

From your Linode dashboard, note the IP addresses for:

- **CI/CD Linode**: `YOUR_CI_CD_IP` (IP address only, no domain needed)
- **Production Linode**: `YOUR_PRODUCTION_IP` (IP address for SSH and web access)

#### 0.2 Test Initial SSH Access

Test SSH access to both Linodes:

```bash
# Test CI/CD Linode (IP address only)
ssh root@YOUR_CI_CD_IP

# Test Production Linode (IP address only)
ssh root@YOUR_PRODUCTION_IP
```

**Expected output**: An SSH login prompt asking for the root password.

**If something goes wrong**:
- Verify the IP addresses are correct
- Check that SSH is enabled on the Linodes
- Ensure your local machine can reach the Linodes (no firewall blocking)

#### 0.3 Choose Your Names

Before proceeding, decide on:

1. **CI Service Account Name**: Choose a username for the CI service account (e.g., `ci-service`)
   - Replace `CI_SERVICE_USER` in this guide with your chosen name
   - This account runs the CI pipeline and Docker operations on the CI Linode

2. **CI Deployment User Name**: Choose a username for CI deployment tasks (e.g., `ci-deploy`)
   - Replace `CI_DEPLOY_USER` in this guide with your chosen name
   - This account has sudo privileges for deployment tasks

3. **Application Name**: Choose a name for your application (e.g., `sharenet`)
   - Replace `APP_NAME` in this guide with your chosen name

**Example**: If you choose `ci-service` as the CI service account, `ci-deploy` as the CI deployment user, and `sharenet` as the application name:
- Replace all `CI_SERVICE_USER` with `ci-service`
- Replace all `CI_DEPLOY_USER` with `ci-deploy`
- Replace all `APP_NAME` with `sharenet`

**Security Model**:
- **CI Service Account (`CI_SERVICE_USER`)**: Runs the CI pipeline and Docker operations; no sudo access
- **CI Deployment User (`CI_DEPLOY_USER`)**: Handles SSH communication and orchestration; has sudo access
- **Root**: Only used for initial setup, then disabled for SSH access
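
If you want to avoid hand-editing every placeholder, one option is to define your chosen names once as shell variables and substitute them into commands as you go. A minimal sketch (the values are the example names above):

```bash
# Define the names once per shell session (example values)
export CI_SERVICE_USER=ci-service
export CI_DEPLOY_USER=ci-deploy
export APP_NAME=sharenet

# Commands from later steps can then reference the variables instead of placeholders,
# e.g. the user-creation command from Step 0.4.4:
sudo useradd -m -s /bin/bash "$CI_DEPLOY_USER"
```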

#### 0.4 Set Up SSH Key Authentication for Local Development

**Important**: Do this on both Linodes to enable secure SSH access from your local development machine.

##### 0.4.1 Generate an SSH Key on Your Local Machine

On your local development machine, generate an SSH key pair:

```bash
# Generate an SSH key pair (if you don't already have one)
ssh-keygen -t ed25519 -C "your-email@example.com" -f ~/.ssh/id_ed25519 -N ""

# Or use an existing key if you have one
ls ~/.ssh/id_ed25519.pub
```

##### 0.4.2 Add Your Public Key to Both Linodes

Copy your public key to both Linodes:

```bash
# Copy your public key to the CI/CD Linode
ssh-copy-id root@YOUR_CI_CD_IP

# Copy your public key to the Production Linode
ssh-copy-id root@YOUR_PRODUCTION_IP
```

**Alternative method** (if ssh-copy-id doesn't work):

```bash
# Copy your public key content
cat ~/.ssh/id_ed25519.pub

# Then manually add it to each server
ssh root@YOUR_CI_CD_IP
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

ssh root@YOUR_PRODUCTION_IP
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

##### 0.4.3 Test SSH Key Authentication

Test that you can access both servers without passwords:

```bash
# Test CI/CD Linode
ssh root@YOUR_CI_CD_IP 'echo "SSH key authentication works for CI/CD"'

# Test Production Linode
ssh root@YOUR_PRODUCTION_IP 'echo "SSH key authentication works for Production"'
```

**Expected output**: The echo messages should appear without password prompts.

##### 0.4.4 Create Deployment Users

On both Linodes, create the deployment user with sudo privileges:

**For the CI Linode:**

```bash
# Create the CI deployment user
sudo useradd -m -s /bin/bash CI_DEPLOY_USER
sudo usermod -aG sudo CI_DEPLOY_USER

# Set a random password (for emergency console access only)
echo "CI_DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd

# Copy your SSH key to the CI deployment user
sudo mkdir -p /home/CI_DEPLOY_USER/.ssh
sudo cp ~/.ssh/authorized_keys /home/CI_DEPLOY_USER/.ssh/
sudo chown -R CI_DEPLOY_USER:CI_DEPLOY_USER /home/CI_DEPLOY_USER/.ssh
sudo chmod 700 /home/CI_DEPLOY_USER/.ssh
sudo chmod 600 /home/CI_DEPLOY_USER/.ssh/authorized_keys

# Configure passwordless sudo for the deployment user (see the security note below)
echo "CI_DEPLOY_USER ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/CI_DEPLOY_USER
sudo chmod 440 /etc/sudoers.d/CI_DEPLOY_USER
```

**For the Production Linode:**

```bash
# Create the production deployment user
sudo useradd -m -s /bin/bash PROD_DEPLOY_USER
sudo usermod -aG sudo PROD_DEPLOY_USER

# Set a random password (for emergency console access only)
echo "PROD_DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd

# Copy your SSH key to the production deployment user
sudo mkdir -p /home/PROD_DEPLOY_USER/.ssh
sudo cp ~/.ssh/authorized_keys /home/PROD_DEPLOY_USER/.ssh/
sudo chown -R PROD_DEPLOY_USER:PROD_DEPLOY_USER /home/PROD_DEPLOY_USER/.ssh
sudo chmod 700 /home/PROD_DEPLOY_USER/.ssh
sudo chmod 600 /home/PROD_DEPLOY_USER/.ssh/authorized_keys

# Configure passwordless sudo for the deployment user (see the security note below)
echo "PROD_DEPLOY_USER ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/PROD_DEPLOY_USER
sudo chmod 440 /etc/sudoers.d/PROD_DEPLOY_USER
```

**Security Note**: This configuration allows the deployment users to use sudo without a password, which is more secure for CI/CD automation since there are no passwords to store or expose. The random password is set for emergency console access only.
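
Before relying on the new drop-in files, it is worth checking their syntax, since a malformed sudoers file can break sudo entirely:

```bash
# Validate the sudoers drop-ins created above
sudo visudo -cf /etc/sudoers.d/CI_DEPLOY_USER     # on the CI Linode
sudo visudo -cf /etc/sudoers.d/PROD_DEPLOY_USER   # on the Production Linode
# Expected output: ".../CI_DEPLOY_USER: parsed OK"
```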

##### 0.4.5 Test Sudo Access

Test that the deployment users can use sudo without password prompts:

```bash
# Test CI deployment user sudo access
ssh CI_DEPLOY_USER@YOUR_CI_CD_IP 'sudo whoami'

# Test production deployment user sudo access
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'sudo whoami'
```

**Expected output**: Both commands should return `root` without prompting for a password.

##### 0.4.6 Test Deployment User Access

Test that you can access both servers as the deployment users:

```bash
# Test CI/CD Linode
ssh CI_DEPLOY_USER@YOUR_CI_CD_IP 'echo "CI deployment user SSH access works for CI/CD"'

# Test Production Linode
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "Production deployment user SSH access works for Production"'
```

**Expected output**: The echo messages should appear without password prompts.

##### 0.4.7 Create an SSH Config for Easy Access

On your local machine, create an SSH config file for easy access:

```bash
# Create SSH config
cat > ~/.ssh/config << 'EOF'
Host ci-cd-dev
    HostName YOUR_CI_CD_IP
    User CI_DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no

Host production-dev
    HostName YOUR_PRODUCTION_IP
    User PROD_DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no
EOF

chmod 600 ~/.ssh/config
```

**Note**: `StrictHostKeyChecking no` skips host key verification for convenience; if you prefer a stricter setting, `StrictHostKeyChecking accept-new` records each key on first connection and still rejects later changes.

Now you can access the servers easily:

```bash
ssh ci-cd-dev
ssh production-dev
```

##### 0.4.8 Secure SSH Configuration

**Critical Security Step**: After setting up SSH key authentication, you must disable password authentication and root login to secure your servers.

**For both the CI/CD and Production Linodes:**

**Step 1: Edit the SSH Configuration File**

```bash
# Open the SSH configuration file in nano
sudo nano /etc/ssh/sshd_config
```

**Step 2: Disallow Root Logins**

Find the line that says:

```
#PermitRootLogin prohibit-password
```

Change it to:

```
PermitRootLogin no
```

**Step 3: Disable Password Authentication**

Find the line that says:

```
#PasswordAuthentication yes
```

Change it to:

```
PasswordAuthentication no
```

**Step 4: Configure the Protocol Family (Optional)**

If you only need IPv4 connections, find or add:

```
#AddressFamily any
```

Change it to:

```
AddressFamily inet
```

**Step 5: Save and Exit**

- Press `Ctrl + X` to exit
- Press `Y` to confirm saving
- Press `Enter` to confirm the filename

**Step 6: Test the SSH Configuration**

```bash
# Check the SSH configuration for syntax errors
sudo sshd -t
```

**Step 7: Restart the SSH Service**

For Ubuntu 22.10+ (socket-based activation):

```bash
sudo systemctl restart ssh
```

For other distributions:

```bash
sudo systemctl restart sshd
```

**Step 8: Verify SSH Access**

**IMPORTANT**: Test SSH access from a new terminal window before closing your current session:

```bash
# Test CI/CD Linode
ssh CI_DEPLOY_USER@YOUR_CI_CD_IP 'echo "SSH configuration test successful"'

# Test Production Linode
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "SSH configuration test successful"'
```

**What these changes do:**

- **`PermitRootLogin no`**: Completely disables root SSH access
- **`PasswordAuthentication no`**: Disables password-based authentication
- **`AddressFamily inet`**: Listens only on IPv4 (optional, for additional security)

**Security Benefits:**

- **No root access**: Eliminates the most common attack vector
- **Key-only authentication**: Prevents brute-force password attacks
- **Protocol restriction**: Limits SSH to IPv4 only (if configured)

**Emergency Access:**

If you lose SSH access, you can still reach the server through:
- **Linode Console**: Use the Linode dashboard's console access
- **Emergency mode**: Boot into single-user mode if needed

**Verification Commands:**

```bash
# Check the SSH configuration
sudo grep -E "(PermitRootLogin|PasswordAuthentication|AddressFamily)" /etc/ssh/sshd_config

# Check the SSH service status
sudo systemctl status ssh

# Check the SSH logs for any issues
sudo journalctl -u ssh -f

# Test SSH access from a new session
ssh CI_DEPLOY_USER@YOUR_CI_CD_IP 'whoami'
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'whoami'
```

**Expected Output:**

- `PermitRootLogin no`
- `PasswordAuthentication no`
- `AddressFamily inet` (if configured)
- The SSH service should be "active (running)"
- The test commands should return the deployment user names

**Important Security Notes:**

1. **Test before closing**: Always test SSH access from a new session before closing your current SSH connection
2. **Keep a backup**: You can restore the original configuration if needed
3. **Monitor logs**: Check `/var/log/auth.log` for SSH activity and potential attacks
4. **Regular updates**: Keep SSH and system packages updated for security patches

**Alternative: Manual Configuration with Backup**

If you prefer to edit the file manually with a backup:

```bash
# Create a backup
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.backup

# Edit the file
sudo nano /etc/ssh/sshd_config

# Test the configuration
sudo sshd -t

# Restart the service
sudo systemctl restart ssh
```

---

## Part 1: CI/CD Linode Setup

### Step 1: Initial System Setup

#### 1.1 Update the System

```bash
sudo apt update && sudo apt upgrade -y
```

**What this does**: Updates the package lists and upgrades all installed packages to their latest versions.

**Expected output**: A list of packages being updated, followed by completion messages.

#### 1.2 Configure the Timezone

```bash
# Configure the timezone interactively
sudo dpkg-reconfigure tzdata

# Verify the timezone setting
date
```

**What this does**: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).

**Expected output**: After selecting your timezone, the `date` command should show the current date and time in that timezone.

#### 1.3 Configure /etc/hosts

```bash
# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts

# Verify the configuration
cat /etc/hosts
```

**What this does**:
- Adds localhost entries for both the IPv4 and IPv6 addresses to `/etc/hosts`
- Ensures proper localhost resolution over both IPv4 and IPv6

**Important**: Replace `YOUR_CI_CD_IPV4_ADDRESS` and `YOUR_CI_CD_IPV6_ADDRESS` with the actual IPv4 and IPv6 addresses of your CI/CD Linode, obtained from your Linode dashboard.

**Expected output**: The `/etc/hosts` file should show entries for `127.0.0.1`, `::1`, and your Linode's actual IP addresses, all mapping to `localhost`.

#### 1.4 Install Essential Packages

```bash
sudo apt install -y \
  curl \
  wget \
  git \
  build-essential \
  pkg-config \
  libssl-dev \
  ca-certificates \
  apt-transport-https \
  software-properties-common \
  apache2-utils
```

#### 1.5 Install Podman

```bash
# Install Podman and related tools
sudo apt install -y podman

# Verify the installation
podman --version

# Configure Podman for rootless operation (optional but recommended)
echo 'kernel.unprivileged_userns_clone=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# Configure subuid/subgid for CI_SERVICE_USER
# Note: CI_SERVICE_USER does not exist yet - it is created in Step 2.1.
# Run these two commands after completing Step 2.1.
sudo usermod --add-subuids 100000-165535 CI_SERVICE_USER
sudo usermod --add-subgids 100000-165535 CI_SERVICE_USER
```

**What this does**: Installs Podman and configures it for rootless operation, which is needed for the CI pipeline and Docker Registry operations. The subuid/subgid mappings give the service account the UID/GID ranges rootless Podman needs; since `CI_SERVICE_USER` is created in Step 2, defer those two commands until that user exists.

### Step 2: Create Users

#### 2.1 Create the CI Service Account

```bash
# Create a dedicated group for the CI service account
sudo groupadd -r CI_SERVICE_USER

# Create the CI service account user with the dedicated group
sudo useradd -r -g CI_SERVICE_USER -s /bin/bash -m -d /home/CI_SERVICE_USER CI_SERVICE_USER
echo "CI_SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd
```

#### 2.2 Verify the Users

```bash
sudo su - CI_SERVICE_USER
whoami
pwd
exit

sudo su - CI_DEPLOY_USER
whoami
pwd
exit
```

### Step 3: Clone the Repository for Registry Configuration

#### 3.1 Clone the Repository

```bash
# Switch to CI_DEPLOY_USER (who has sudo access)
sudo su - CI_DEPLOY_USER

# Create the application directory and clone the repository
sudo mkdir -p /opt/APP_NAME
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/APP_NAME
cd /opt
sudo git clone https://your-forgejo-instance/your-username/APP_NAME.git
sudo chown -R CI_SERVICE_USER:CI_SERVICE_USER APP_NAME/

# Verify the registry folder exists
ls -la /opt/APP_NAME/registry/
```

**Important**: Replace `your-forgejo-instance`, `your-username`, and `APP_NAME` with your actual Forgejo instance URL, username, and application name.

**What this does**:
- CI_DEPLOY_USER creates the directory structure and clones the repository
- CI_SERVICE_USER owns all the files for security
- The registry configuration files are now available at `/opt/APP_NAME/registry/`

### Step 4: Set Up Docker Registry v2 with nginx

We'll set up a basic Docker Registry v2 with nginx as a reverse proxy, configured to allow unauthenticated pulls but require authentication for pushes.

#### 4.1 Configure FHS-Compliant Registry Directories

```bash
# Create FHS-compliant directories for registry data and certificates
sudo mkdir -p /var/lib/registry
sudo mkdir -p /etc/registry/certs
sudo mkdir -p /var/log/registry
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /var/lib/registry
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /etc/registry/certs
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /var/log/registry
sudo chmod 755 /var/lib/registry
sudo chmod 755 /etc/registry/certs
sudo chmod 755 /var/log/registry
```

#### 4.2 Create the Docker Registry v2 Pod Setup

```bash
# Navigate to the cloned application directory
cd /opt/APP_NAME/registry

# Update openssl.conf with your actual IP address and registry name
sudo sed -i "s/YOUR_CI_CD_IP/YOUR_ACTUAL_IP_ADDRESS/g" /opt/APP_NAME/registry/openssl.conf
sudo sed -i "s/YOUR_REGISTRY_NAME/APP_NAME-Registry/g" /opt/APP_NAME/registry/openssl.conf
```

#### 4.2.1 Security Features Applied

The Docker Registry v2 setup includes comprehensive security hardening:

**Container Security:**
- ✅ Rootless containers (runAsUser=1000, runAsGroup=1000)
- ✅ All Linux capabilities dropped
- ✅ Privilege escalation disabled
- ✅ Read-only root filesystem with tmpfs for /tmp
- ✅ Image deletion disabled (REGISTRY_STORAGE_DELETE_ENABLED=false)

**Network Security:**
- ✅ TLS 1.2/1.3 only, with modern ciphers
- ✅ HSTS headers enabled
- ✅ Rate limiting (10r/s for reads, 5r/s for writes)
- ✅ Client max body size limit (2GB)
- ✅ Registry listens only internally (no host-published port)

**Resource Limits:**
- ✅ CPU limits: 1000m for the registry, 500m for nginx
- ✅ Memory limits: 1Gi for the registry, 512Mi for nginx
- ✅ File descriptor limits via ulimits

**Authentication & Authorization:**
- ✅ Basic auth with htpasswd for write operations (illustrated below)
- ✅ Container policy enforcement via containers-policy.json
- ✅ Volume mounts read-only where possible
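
For orientation, the pull/push split is the standard nginx pattern of leaving read methods open while protecting everything else with basic auth. The snippet below is purely illustrative; the project's `nginx.conf` in `/opt/APP_NAME/registry/` is the authoritative configuration, and the upstream address here is a placeholder:

```bash
# Illustration only - do not apply; it just shows the read/write split pattern
cat << 'EOF'
server {
    listen 443 ssl;                              # public read access (pulls)
    location /v2/ {
        limit_except GET HEAD { deny all; }      # anything but reads is rejected
        proxy_pass http://127.0.0.1:5000;        # registry container (placeholder address)
    }
}
server {
    listen 4443 ssl;                             # authenticated operations (pushes)
    location /v2/ {
        auth_basic           "Registry";
        auth_basic_user_file /etc/registry/auth/.htpasswd;
        proxy_pass http://127.0.0.1:5000;
    }
}
EOF
```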

#### 4.2.2 Set Up Authentication and Permissions

```bash
# Create the FHS-compliant authentication directory
sudo mkdir -p /etc/registry/auth
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /etc/registry/auth
sudo chmod 755 /etc/registry/auth

# Create the htpasswd file for nginx authentication
# Save this password somewhere safe
REGISTRY_PASSWORD="your-secure-registry-password"

# Create the htpasswd file in the FHS-compliant location
sudo htpasswd -cb /etc/registry/auth/.htpasswd registry-user "$REGISTRY_PASSWORD"

# Set secure permissions on the htpasswd file (owner read/write only)
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /etc/registry/auth/.htpasswd
sudo chmod 600 /etc/registry/auth/.htpasswd

# Set proper permissions for the configuration files
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/APP_NAME/registry/nginx.conf
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/APP_NAME/registry/openssl.conf
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/APP_NAME/registry/registry-pod.yaml
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /opt/APP_NAME/registry/containers-policy.json
sudo chmod 644 /opt/APP_NAME/registry/nginx.conf
sudo chmod 644 /opt/APP_NAME/registry/openssl.conf
sudo chmod 644 /opt/APP_NAME/registry/registry-pod.yaml
sudo chmod 644 /opt/APP_NAME/registry/containers-policy.json
```

#### 4.2.3 Create the FHS-Compliant Directory Structure

```bash
# Create the FHS-compliant certificate directory structure
sudo mkdir -p /etc/registry/certs/private
sudo mkdir -p /etc/registry/certs/requests
sudo mkdir -p /etc/registry/certs/ca
sudo mkdir -p /var/lib/registry/data

# Set proper ownership for the registry data directory
sudo chown -R CI_SERVICE_USER:CI_SERVICE_USER /var/lib/registry/data

# Set proper ownership and permissions for the certificate subdirectories
sudo chown -R CI_SERVICE_USER:CI_SERVICE_USER /etc/registry/certs/private
sudo chown -R CI_SERVICE_USER:CI_SERVICE_USER /etc/registry/certs/requests
sudo chown -R CI_SERVICE_USER:CI_SERVICE_USER /etc/registry/certs/ca
sudo chmod 700 /etc/registry/certs/private    # Private keys - restricted access
sudo chmod 755 /etc/registry/certs/requests   # Certificate requests
sudo chmod 755 /etc/registry/certs/ca         # CA certificates
sudo chmod 755 /var/lib/registry/data         # Registry data
```

#### 4.2.4 Generate a TLS Certificate and Install It for Podman

**Generate a self-signed certificate:**

```bash
# 1. Generate a self-signed certificate with a proper CA chain using the FHS-compliant structure
cd /etc/registry/certs

# Generate the CA private key in the private subdirectory
sudo -u CI_SERVICE_USER openssl genrsa -out private/ca.key 4096

# Generate the CA certificate in the ca subdirectory
sudo -u CI_SERVICE_USER openssl req -new -x509 -key private/ca.key \
  -out ca/ca.crt \
  -days 365 \
  -subj "/O=YOUR_ORGANIZATION/CN=APP_NAME-Registry-CA"

# Generate the server private key in the private subdirectory
sudo -u CI_SERVICE_USER openssl genrsa -out private/registry.key 4096

# Copy and use the project's OpenSSL configuration file
sudo cp /opt/APP_NAME/registry/openssl.conf /etc/registry/certs/requests/
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /etc/registry/certs/requests/openssl.conf

# Generate the server certificate signing request in the requests subdirectory
sudo -u CI_SERVICE_USER openssl req -new -key private/registry.key \
  -out requests/registry.csr \
  -config requests/openssl.conf

# Sign the server certificate with the CA
sudo -u CI_SERVICE_USER openssl x509 -req -in requests/registry.csr \
  -CA ca/ca.crt -CAkey private/ca.key -CAcreateserial \
  -out registry.crt \
  -days 365 \
  -extensions req_ext \
  -extfile requests/openssl.conf

# Set proper FHS-compliant permissions
sudo chmod 600 private/ca.key private/registry.key          # Private keys - owner read/write only
sudo chmod 644 ca/ca.crt registry.crt                       # Certificates - world readable
sudo chmod 644 requests/registry.csr requests/openssl.conf  # Requests - world readable

# Verify certificate creation
sudo -u CI_SERVICE_USER openssl x509 -in /etc/registry/certs/registry.crt -text -noout | grep -E "(Subject:|DNS:|IP Address:)"

# Create the directories required by the containers
sudo mkdir -p /var/log/nginx
sudo mkdir -p /tmp/registry-tmp /tmp/nginx-tmp
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /var/log/nginx /tmp/registry-tmp /tmp/nginx-tmp
sudo chmod 755 /var/log/nginx /tmp/registry-tmp /tmp/nginx-tmp

# 2. Install the CA certificate in the system trust store (for curl, wget, etc.)
sudo cp /etc/registry/certs/ca/ca.crt /usr/local/share/ca-certificates/registry-ca.crt

# This step should finish with "1 added, 0 removed". If it does not, there may be a problem with
# the certificate you generated, or a certificate may already exist in the trust store.
sudo update-ca-certificates
```
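
The heading mentions installing the certificate for Podman: besides the system trust store, Podman can also be pointed at the CA directly through its per-registry certificate directories. An optional sketch (directory names follow the `host:port` form Podman looks up under `/etc/containers/certs.d/`):

```bash
# Optional: make Podman trust the registry CA explicitly for both endpoints
sudo mkdir -p /etc/containers/certs.d/YOUR_ACTUAL_IP_ADDRESS:4443
sudo cp /etc/registry/certs/ca/ca.crt /etc/containers/certs.d/YOUR_ACTUAL_IP_ADDRESS:4443/ca.crt

sudo mkdir -p /etc/containers/certs.d/YOUR_ACTUAL_IP_ADDRESS:443
sudo cp /etc/registry/certs/ca/ca.crt /etc/containers/certs.d/YOUR_ACTUAL_IP_ADDRESS:443/ca.crt
```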

---

**Note:** Replace `YOUR_ACTUAL_IP_ADDRESS` with your server's IP address.

---

#### 4.5 Set Up a Systemd Service for Docker Registry v2

```bash
# Create the system-wide Podman configuration
sudo mkdir -p /etc/containers
sudo tee /etc/containers/registries.conf > /dev/null << 'EOF'
unqualified-search-registries = ["docker.io"]
EOF

# Set proper permissions for the system-wide Podman config
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /etc/containers/registries.conf
sudo chmod 644 /etc/containers/registries.conf

# Enable lingering for CI_SERVICE_USER to allow user systemd services
sudo loginctl enable-linger CI_SERVICE_USER

# Create Podman rootless directories outside the home directory
sudo mkdir -p /var/tmp/podman-$(id -u CI_SERVICE_USER)/{root,run,tmp,xdg-data,xdg-config}
sudo chown -R CI_SERVICE_USER:CI_SERVICE_USER /var/tmp/podman-$(id -u CI_SERVICE_USER)
sudo chmod 755 /var/tmp/podman-$(id -u CI_SERVICE_USER)

# Create the runtime directory for the user
sudo mkdir -p /run/user/$(id -u CI_SERVICE_USER)/podman-run
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /run/user/$(id -u CI_SERVICE_USER)/podman-run
sudo chmod 755 /run/user/$(id -u CI_SERVICE_USER)/podman-run

# Initialize Podman with the rootless configuration (no home directory access)
sudo su - CI_SERVICE_USER -c "env PODMAN_ROOT=/var/tmp/podman-\$(id -u)/root PODMAN_RUNROOT=/run/user/\$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-\$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-\$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-\$(id -u)/xdg-config podman system migrate"
sudo su - CI_SERVICE_USER -c "env PODMAN_ROOT=/var/tmp/podman-\$(id -u)/root PODMAN_RUNROOT=/run/user/\$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-\$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-\$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-\$(id -u)/xdg-config podman info"

# Install the systemd service and configuration files from the repository
sudo cp /opt/APP_NAME/registry/docker-registry.service /etc/systemd/system/docker-registry.service
sudo cp /opt/APP_NAME/registry/containers-policy.json /etc/containers/policy.json

# Set proper permissions for the policy file
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /etc/containers/policy.json
sudo chmod 644 /etc/containers/policy.json

# Replace the APP_NAME placeholder with your actual application name
sudo sed -i "s/APP_NAME/YOUR_ACTUAL_APP_NAME/g" /etc/systemd/system/docker-registry.service

# Replace the CI_SERVICE_USER placeholder with your actual CI service user name
sudo sed -i "s/CI_SERVICE_USER/YOUR_ACTUAL_CI_SERVICE_USER/g" /etc/systemd/system/docker-registry.service

# Note: The service is configured to use rootless Podman with all state outside the home directory:
# - PODMAN_ROOT=/var/tmp/podman-%u/root
# - PODMAN_RUNROOT=/run/user/%u/podman-run
# - PODMAN_TMPDIR=/var/tmp/podman-%u/tmp
# - XDG_DATA_HOME=/var/tmp/podman-%u/xdg-data
# - XDG_CONFIG_HOME=/var/tmp/podman-%u/xdg-config

# Configure the firewall for the Docker Registry v2 ports
sudo ufw allow 443/tcp   # Docker Registry via nginx (public read access)
sudo ufw allow 4443/tcp  # Docker Registry via nginx (authenticated operations)

# Enable and start the Docker Registry v2 service
sudo systemctl daemon-reload
sudo systemctl enable docker-registry.service
sudo systemctl start docker-registry.service

# Verify the service is running
sudo systemctl status docker-registry.service

# Check the service logs for any issues
sudo journalctl -u docker-registry.service -f --no-pager -n 50

# Verify Podman is using the non-home paths
sudo su - CI_SERVICE_USER -c "env PODMAN_ROOT=/var/tmp/podman-\$(id -u)/root PODMAN_RUNROOT=/run/user/\$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-\$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-\$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-\$(id -u)/xdg-config podman info --format '{{.Store.GraphRoot}} {{.Store.RunRoot}}'"
```

#### 4.6 Verify the Docker Registry v2 Service

```bash
# Check that the service is running properly
sudo systemctl status docker-registry.service

# Check that the pods are running
sudo su - CI_SERVICE_USER -c "env PODMAN_ROOT=/var/tmp/podman-\$(id -u)/root PODMAN_RUNROOT=/run/user/\$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-\$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-\$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-\$(id -u)/xdg-config podman pod ps"

# Check the nginx logs
sudo su - CI_SERVICE_USER -c "env PODMAN_ROOT=/var/tmp/podman-\$(id -u)/root PODMAN_RUNROOT=/run/user/\$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-\$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-\$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-\$(id -u)/xdg-config podman logs registry-pod-nginx"

# Check the registry logs
sudo su - CI_SERVICE_USER -c "env PODMAN_ROOT=/var/tmp/podman-\$(id -u)/root PODMAN_RUNROOT=/run/user/\$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-\$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-\$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-\$(id -u)/xdg-config podman logs registry-pod-registry"
```
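
You can also probe both nginx endpoints at the HTTP level before attempting a real push or pull. A quick sketch (the exact status codes depend on the project's nginx.conf, but the `/v2/` endpoint should respond once the pod is up):

```bash
# Unauthenticated read endpoint (port 443)
curl -sk https://YOUR_ACTUAL_IP_ADDRESS/v2/ -o /dev/null -w "read endpoint:  %{http_code}\n"

# Authenticated endpoint (port 4443), using the registry-user credentials from Step 4.2.2
curl -sk -u registry-user:your-secure-registry-password \
  https://YOUR_ACTUAL_IP_ADDRESS:4443/v2/ -o /dev/null -w "write endpoint: %{http_code}\n"
```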

#### 4.7 Test the Registry Setup

**For Option A (self-signed certificates):**

```bash
# Switch to CI_SERVICE_USER for testing (CI_SERVICE_USER runs the CI pipeline and Podman operations)
sudo su - CI_SERVICE_USER

# Navigate to the application directory
cd /opt/APP_NAME

# Test an authenticated push using the project's registry configuration (port 4443)
echo "your-secure-registry-password" | env PODMAN_ROOT=/var/tmp/podman-$(id -u)/root PODMAN_RUNROOT=/run/user/$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-$(id -u)/xdg-config podman login YOUR_ACTUAL_IP_ADDRESS:4443 -u registry-user --password-stdin

# Create and push a test image to the authenticated endpoint
echo "FROM alpine:latest" > /tmp/test.Dockerfile
env PODMAN_ROOT=/var/tmp/podman-$(id -u)/root PODMAN_RUNROOT=/run/user/$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-$(id -u)/xdg-config podman build -f /tmp/test.Dockerfile -t YOUR_ACTUAL_IP_ADDRESS:4443/APP_NAME/test:latest /tmp
env PODMAN_ROOT=/var/tmp/podman-$(id -u)/root PODMAN_RUNROOT=/run/user/$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-$(id -u)/xdg-config podman push YOUR_ACTUAL_IP_ADDRESS:4443/APP_NAME/test:latest

# Test an unauthenticated pull from the standard HTTPS endpoint (port 443)
env PODMAN_ROOT=/var/tmp/podman-$(id -u)/root PODMAN_RUNROOT=/run/user/$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-$(id -u)/xdg-config podman logout YOUR_ACTUAL_IP_ADDRESS:4443
env PODMAN_ROOT=/var/tmp/podman-$(id -u)/root PODMAN_RUNROOT=/run/user/$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-$(id -u)/xdg-config podman pull YOUR_ACTUAL_IP_ADDRESS/APP_NAME/test:latest

# Test that an unauthorized push to the authenticated endpoint is blocked
echo "FROM alpine:latest" > /tmp/unauthorized.Dockerfile
env PODMAN_ROOT=/var/tmp/podman-$(id -u)/root PODMAN_RUNROOT=/run/user/$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-$(id -u)/xdg-config podman build -f /tmp/unauthorized.Dockerfile -t YOUR_ACTUAL_IP_ADDRESS:4443/APP_NAME/unauthorized:latest /tmp
env PODMAN_ROOT=/var/tmp/podman-$(id -u)/root PODMAN_RUNROOT=/run/user/$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-$(id -u)/xdg-config podman push YOUR_ACTUAL_IP_ADDRESS:4443/APP_NAME/unauthorized:latest
# Expected: this push should fail with an authentication error

# Clean up
env PODMAN_ROOT=/var/tmp/podman-$(id -u)/root PODMAN_RUNROOT=/run/user/$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-$(id -u)/xdg-config podman rmi YOUR_ACTUAL_IP_ADDRESS:4443/APP_NAME/test:latest 2>/dev/null || true
env PODMAN_ROOT=/var/tmp/podman-$(id -u)/root PODMAN_RUNROOT=/run/user/$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-$(id -u)/xdg-config podman rmi YOUR_ACTUAL_IP_ADDRESS/APP_NAME/test:latest 2>/dev/null || true
env PODMAN_ROOT=/var/tmp/podman-$(id -u)/root PODMAN_RUNROOT=/run/user/$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-$(id -u)/xdg-config podman rmi YOUR_ACTUAL_IP_ADDRESS:4443/APP_NAME/unauthorized:latest 2>/dev/null || true
exit
```

**Expected behavior**:
- ✅ Push requires authentication with the `registry-user` credentials on port 4443
- ✅ Pull works without authentication (public read access) on port 443
- ✅ Unauthorized pushes are blocked on the authenticated endpoint
- ✅ Registry accessible at `https://YOUR_ACTUAL_IP_ADDRESS:4443` for authenticated operations
- ✅ Registry accessible at `https://YOUR_ACTUAL_IP_ADDRESS` for unauthenticated pulls
- ✅ Proper FHS-compliant certificate structure with secure permissions

**Troubleshooting TLS errors:**

If you get a TLS error such as `remote error: tls: internal error` when using self-signed certificates, verify the certificate installation and configuration:

```bash
# Verify the certificate was installed correctly in the system trust store
ls -la /usr/local/share/ca-certificates/registry-ca.crt

# Verify the certificate chain is valid
openssl verify -CAfile /etc/registry/certs/ca/ca.crt /etc/registry/certs/registry.crt

# Test the certificate connection
openssl s_client -connect YOUR_ACTUAL_IP_ADDRESS:4443 -servername YOUR_ACTUAL_IP_ADDRESS < /dev/null

# Verify nginx is using the correct certificates
sudo su - CI_SERVICE_USER -c "env PODMAN_ROOT=/var/tmp/podman-\$(id -u)/root PODMAN_RUNROOT=/run/user/\$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-\$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-\$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-\$(id -u)/xdg-config podman exec registry-pod-nginx ls -la /etc/registry/certs/"

# If issues persist, restart the Docker Registry v2 service to reload the certificates
sudo systemctl restart docker-registry.service

# Wait for the Docker Registry to restart, then test again
sleep 10

# Test a Podman login to the authenticated endpoint
echo "your-secure-registry-password" | env PODMAN_ROOT=/var/tmp/podman-$(id -u)/root PODMAN_RUNROOT=/run/user/$(id -u)/podman-run PODMAN_TMPDIR=/var/tmp/podman-$(id -u)/tmp XDG_DATA_HOME=/var/tmp/podman-$(id -u)/xdg-data XDG_CONFIG_HOME=/var/tmp/podman-$(id -u)/xdg-config podman login YOUR_ACTUAL_IP_ADDRESS:4443 -u registry-user --password-stdin
```

**Certificate structure summary:**

The project uses a two-port configuration:
- **Port 443**: Unauthenticated pulls (public read access)
- **Port 4443**: Authenticated pushes (registry-user credentials required)

**FHS-compliant certificate locations:**
- **Private keys**: `/etc/registry/certs/private/` (mode 600)
- **CA certificates**: `/etc/registry/certs/ca/` (mode 644)
- **Certificate requests**: `/etc/registry/certs/requests/` (mode 644)
- **Server certificates**: `/etc/registry/certs/` (mode 644)
- **System trust store**: `/usr/local/share/ca-certificates/registry-ca.crt`

### Step 5: Install the Forgejo Actions Runner

#### 5.1 Download the Runner

**Important**: Run this step as **CI_DEPLOY_USER** (not root or CI_SERVICE_USER). CI_DEPLOY_USER handles deployment tasks, including downloading and installing the Forgejo runner.

**Note**: The dynamic-version command below uses `jq`, which is not in the Step 1.4 package list; install it first with `sudo apt install -y jq`, or use the version-pinned alternative below.

```bash
cd ~

# Get the latest version dynamically
LATEST_VERSION=$(curl -s https://code.forgejo.org/api/v1/repos/forgejo/runner/releases | jq -r '.[0].tag_name')
echo "Downloading Forgejo runner version: $LATEST_VERSION"

# Download the latest runner
wget https://code.forgejo.org/forgejo/runner/releases/download/${LATEST_VERSION}/forgejo-runner-${LATEST_VERSION#v}-linux-amd64
chmod +x forgejo-runner-${LATEST_VERSION#v}-linux-amd64
sudo mv forgejo-runner-${LATEST_VERSION#v}-linux-amd64 /usr/bin/forgejo-runner
```

**Alternative: Pin to a Specific Version (Recommended for Production)**

If you prefer to pin to a specific version for stability, replace the dynamic download with:

```bash
cd ~
VERSION="v6.3.1"  # Pin to a specific version
wget https://code.forgejo.org/forgejo/runner/releases/download/${VERSION}/forgejo-runner-${VERSION#v}-linux-amd64
chmod +x forgejo-runner-${VERSION#v}-linux-amd64
sudo mv forgejo-runner-${VERSION#v}-linux-amd64 /usr/bin/forgejo-runner
```

**What this does**:
- **Dynamic approach**: Downloads the latest stable Forgejo Actions runner
- **Version pinning**: Lets you specify a known-good version for production
- **System installation**: Installs the binary system-wide in `/usr/bin/` and makes it executable

**Production recommendation**: Use version pinning in production environments to ensure consistency and avoid unexpected breaking changes.
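
A quick check that the binary is installed and on the PATH:

```bash
# Should print a version string matching the release you downloaded
forgejo-runner --version
```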

#### 5.2 Register the Runner

**Important**: The runner must be registered with your Forgejo instance before it can start. Registration creates the required `.runner` configuration file.

**Step 1: Get Permission to Create Repository-Level Runners**

To create a repository-level runner, you need **Repository Admin** or **Owner** permissions. Here's how to check and manage permissions:

**Check your current permissions:**
1. Go to your repository: `https://your-forgejo-instance/your-username/your-repo`
2. Look for the **Settings** tab in the repository navigation
3. If you see **Actions** in the left sidebar under Settings, you have the right permissions
4. If you don't see Settings or Actions, you don't have admin access

**Add a Repository Admin (repository owner only):**

If you're the repository owner and need to give someone else admin access:

1. **Go to Repository Settings:**
   - Navigate to your repository
   - Click the **Settings** tab
   - Click **Collaborators** in the left sidebar

2. **Add a Collaborator:**
   - Click the **Add Collaborator** button
   - Enter the username or email of the person you want to add
   - Select **Admin** from the role dropdown
   - Click **Add Collaborator**

3. **Alternative: Manage Team Access (for organizations):**
   - Go to **Settings → Collaborators**
   - Click **Manage Team Access**
   - Add the team with **Admin** permissions

**Repository roles and permissions:**

| Role | Can Create Runners | Can Manage Repository | Can Push Code |
|------|--------------------|-----------------------|---------------|
| **Owner** | ✅ Yes | ✅ Yes | ✅ Yes |
| **Admin** | ✅ Yes | ✅ Yes | ✅ Yes |
| **Write** | ❌ No | ❌ No | ✅ Yes |
| **Read** | ❌ No | ❌ No | ❌ No |

**If you don't have permissions:**

**Option 1: Ask the repository owner**
- Contact the person who owns the repository
- Ask them to create the runner and share the registration token with you

**Option 2: Use an organization or user runner**
- If you have access to organization settings, create an org-level runner
- Or create a user-level runner if you own other repositories

**Option 3: Site admin help**
- Contact your Forgejo instance administrator to create a site-level runner

**Site administrator: setting a Repository Admin (Forgejo instance admin)**

To add an existing user as an Administrator of an existing repository in Forgejo, follow these steps:

1. **Go to the repository**: Navigate to the main page of the repository you want to manage.
2. **Access repository settings**: Click the "Settings" tab under the repository name.
3. **Go to Collaborators & teams**: In the sidebar, under the "Access" section, click "Collaborators & teams".
4. **Manage access**: Under "Manage access", locate the existing user you want to make an administrator.
5. **Change their role**: Next to the user's name, open the "Role" dropdown menu and click "Administrator".

**Important note**: If the user is already the Owner of the repository, they do not need to (and indeed cannot) add themselves as an Administrator; repository owners automatically have all administrative permissions.

**Important notes for site administrators:**
- A **Repository Admin** can manage the repository but cannot modify site-wide settings
- The **Site Admin** retains full control over the Forgejo instance
- Changes take effect immediately for the user
- Consider the security implications of granting admin access

**Step 2: Get a Registration Token**

1. Go to your Forgejo repository
2. Navigate to **Settings → Actions → Runners**
3. Click **"New runner"**
4. Copy the registration token

**Step 3: Register the Runner**

```bash
# Switch to CI_DEPLOY_USER to register the runner
sudo su - CI_DEPLOY_USER

cd ~

# Register the runner with your Forgejo instance
forgejo-runner register \
  --instance https://your-forgejo-instance \
  --token YOUR_REGISTRATION_TOKEN \
  --name "ci-runner" \
  --labels "ci" \
  --no-interactive
```

**Important**: Replace `your-forgejo-instance` with your actual Forgejo instance URL and `YOUR_REGISTRATION_TOKEN` with the token you copied in Step 2. Make sure the instance URL ends with a trailing `/`.

**Note**: `your-forgejo-instance` should be the **base URL** of your Forgejo instance (e.g., `https://git.<your-domain>/`), not the full path to the repository. The registration process connects to the specific repository based on the token you provide.

**What this does**:
- Creates the required `.runner` configuration file in CI_DEPLOY_USER's home directory
- Registers the runner with your Forgejo instance
- Sets up the runner with the `ci` label used by the CI workflow

**Step 4: Set Up the System Configuration**

```bash
# Create the system config directory for the Forgejo runner
sudo mkdir -p /etc/forgejo-runner

# Move the runner configuration to the system location
sudo mv /home/CI_DEPLOY_USER/.runner /etc/forgejo-runner/.runner

# Set proper ownership and permissions
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /etc/forgejo-runner/.runner
sudo chmod 600 /etc/forgejo-runner/.runner
```

**What this does**:
- Moves the configuration to the system location (`/etc/forgejo-runner/.runner`)
- Sets ownership and permissions so CI_SERVICE_USER can read the config

**Step 5: Create and Enable the Systemd Service**

```bash
sudo tee /etc/systemd/system/forgejo-runner.service > /dev/null << 'EOF'
[Unit]
Description=Forgejo Actions Runner
After=network.target

[Service]
Type=simple
User=CI_SERVICE_USER
WorkingDirectory=/etc/forgejo-runner
ExecStart=/usr/bin/forgejo-runner daemon
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# Enable the service
sudo systemctl daemon-reload
sudo systemctl enable forgejo-runner.service
```

**What this does**:
- Creates the systemd service configuration for the Forgejo runner
- Sets the working directory to `/etc/forgejo-runner`, where the `.runner` configuration file is located
- The runner starts here, but the CI workflow deploys the application to `/opt/APP_NAME`
- Enables the service to start automatically on boot
- Sets up restart behavior for reliability

#### 5.3 Start the Service

```bash
# Start the Forgejo runner service
sudo systemctl start forgejo-runner.service

# Verify the service is running
sudo systemctl status forgejo-runner.service
```

**Expected output**: The service should show "active (running)" status.

#### 5.4 Test the Runner Configuration

```bash
# Check if the runner is running
sudo systemctl status forgejo-runner.service

# Check the runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager

# Verify the runner appears in Forgejo:
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "ci-runner" with status "Online"
```

**Expected output**:
- `systemctl status` should show "active (running)"
- The Forgejo web interface should show the runner as online with the "ci" label

**If something goes wrong**:
- Check the logs: `sudo journalctl -u forgejo-runner.service -f`
- Verify the token: make sure the registration token is correct
- Check the network: ensure the runner can reach your Forgejo instance
- Restart the service: `sudo systemctl restart forgejo-runner.service`

### Step 6: Set Up Podman-in-Podman (PiP) for CI Operations

**Important**: This step sets up a Podman-in-Podman container that provides an isolated environment for CI/CD operations, eliminating resource contention with the Podman Registry and simplifying cleanup.

#### 6.1 Create the Containerized CI/CD Environment

```bash
# Switch to CI_SERVICE_USER (who has Podman access)
sudo su - CI_SERVICE_USER

# Navigate to the application directory
cd /opt/APP_NAME

# Start the PiP container for isolated Podman operations
podman run -d \
  --name ci-pip \
  --privileged \
  -p 2375:2375 \
  -e DOCKER_TLS_CERTDIR="" \
  quay.io/podman/stable:latest

# Wait a minute or two for PiP to be ready (until Podman inside the container responds)

# Test PiP connectivity
podman exec ci-pip podman version
```

**What this does**:
- **Creates an isolated PiP environment**: Provides an isolated Podman environment for all CI/CD operations
- **Health checks**: Ensures PiP is fully ready before proceeding (see the polling sketch below)
- **Simple setup**: Direct Podman commands for maximum flexibility

**Why CI_SERVICE_USER**: CI_SERVICE_USER has Podman access and runs the CI pipeline, so it needs direct access to the PiP container for seamless CI/CD operations.
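
Rather than waiting a fixed amount of time, you can poll until Podman inside the container answers. A small sketch, using the container name from above:

```bash
# Poll the PiP container until `podman info` succeeds inside it (up to ~2.5 minutes)
for i in $(seq 1 30); do
  if podman exec ci-pip podman info >/dev/null 2>&1; then
    echo "PiP container is ready"
    break
  fi
  echo "Waiting for PiP container... ($i/30)"
  sleep 5
done
```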

#### 6.2 Configure PiP for Docker Registry v2

```bash
# Navigate to the application directory
cd /opt/APP_NAME

# Log in to Docker Registry v2 from within PiP (using the authenticated port 4443)
echo "your-registry-password" | podman exec -i ci-pip podman login YOUR_CI_CD_IP:4443 -u registry-user --password-stdin

# Test Docker Registry v2 connectivity from PiP
podman exec ci-pip podman pull alpine:latest
podman exec ci-pip podman tag alpine:latest YOUR_CI_CD_IP:4443/APP_NAME/test:latest
podman exec ci-pip podman push YOUR_CI_CD_IP:4443/APP_NAME/test:latest

# Test an unauthenticated pull from the standard port 443
podman exec ci-pip podman pull YOUR_CI_CD_IP/APP_NAME/test:latest

# Clean up the test images
podman exec ci-pip podman rmi YOUR_CI_CD_IP:4443/APP_NAME/test:latest
podman exec ci-pip podman rmi YOUR_CI_CD_IP/APP_NAME/test:latest
```

#### 6.3 Set Up the Workspace Directory

**Important**: The CI workflow needs a workspace directory for code checkout. This directory will be used by the Forgejo Actions runner.

```bash
# Switch to CI_DEPLOY_USER (who has sudo privileges)
sudo su - CI_DEPLOY_USER

# Create the workspace directory in /tmp with proper permissions
sudo mkdir -p /tmp/ci-workspace
sudo chown CI_SERVICE_USER:CI_SERVICE_USER /tmp/ci-workspace
sudo chmod 755 /tmp/ci-workspace

# Verify the setup
ls -la /tmp/ci-workspace
```

**What this does**:
- **Creates the workspace**: Provides a dedicated directory for CI operations
- **Proper ownership**: CI_SERVICE_USER owns the directory for write access
- **Appropriate permissions**: 755 allows read/write for the owner and read for others
- **Temporary location**: Uses /tmp for easy cleanup, since no persistence is needed

**Alternative locations** (if you prefer):
- `/opt/ci-workspace` - a more permanent location
- `/home/CI_SERVICE_USER/workspace` - the user's home directory
- `/var/lib/ci-workspace` - a system-managed location

**Note**: The CI workflow uses this directory for code checkout and then copies its contents into the PiP container.
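
A sketch of that hand-off step, assuming the `ci-pip` container and workspace path from above (the actual workflow may do this differently):

```bash
# Copy the checked-out code from the runner's workspace into the PiP container
podman exec ci-pip mkdir -p /workspace
podman cp /tmp/ci-workspace/. ci-pip:/workspace
podman exec ci-pip ls -la /workspace
```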

### FHS-Compliant Directory Structure

The Docker Registry setup follows the Filesystem Hierarchy Standard (FHS) for better organization and security:

**Application files** (in `/opt/APP_NAME/registry/`):
- `registry-pod.yaml` - Kubernetes Pod manifest for Docker Registry v2 and nginx
- `nginx.conf` - nginx reverse proxy configuration from the project repository
- `openssl.conf` - OpenSSL configuration for certificate generation from the project repository
- `containers-policy.json` - container policy for image signature verification
- `docker-registry.service` - systemd service file for Docker Registry v2

**System files** (FHS-compliant locations):
- `/var/lib/registry/data/` - registry data storage
- `/etc/registry/certs/` - SSL/TLS certificate hierarchy:
  - `/etc/registry/certs/private/` - private keys (mode 600)
  - `/etc/registry/certs/ca/` - CA certificates (mode 644)
  - `/etc/registry/certs/requests/` - certificate requests and configs (mode 644)
  - `/etc/registry/certs/registry.crt` - server certificate (mode 644)
- `/etc/registry/auth/.htpasswd` - nginx authentication file (mode 600)
- `/etc/systemd/system/docker-registry.service` - systemd service configuration
- `/var/log/registry/` - registry and nginx logs

**Benefits of FHS compliance**:
- **Data persistence**: Registry data stored in `/var/lib/registry/data/` survives container restarts
- **Certificate security**: Hierarchical certificate structure with proper permissions
- **Authentication security**: The htpasswd file is stored in `/etc/registry/auth/` with restrictive permissions (600)
- **Service management**: A systemd service handles startup, shutdown, and monitoring
- **Separation of concerns**: Private keys are isolated from public certificates, and auth files from configs
- **Log management**: Logs in `/var/log/nginx/` for centralized logging
- **Configuration separation**: Application configs live in the app directory, system data in system directories
- **Policy enforcement**: Container policies enforce image signature verification

**What the PiP registry configuration does**:
- **Configures certificate trust**: Properly sets up Docker Registry certificate trust in the PiP container
- **Fixes ownership issues**: Ensures the certificate has correct ownership for CA trust
- **Tests connectivity**: Verifies the PiP container can pull, tag, and push images to the Docker Registry
- **Validates the setup**: Ensures the complete CI/CD pipeline will work
#### 6.4 CI/CD Workflow Architecture
|
|
|
|
The CI/CD pipeline uses a three-stage approach with dedicated environments for each stage:
|
|
|
|
**Job 1 (Testing) - `docker-compose.test.yml`:**
|
|
- **Purpose**: Comprehensive testing with multiple containers
|
|
- **Environment**: DinD with PostgreSQL, Rust, and Node.js containers
|
|
- **Code Checkout**: Code is checked out directly into the DinD container at `/workspace` from the Forgejo repository that triggered the build
|
|
- **Services**:
|
|
- PostgreSQL database for backend tests
|
|
- Rust toolchain for backend testing and migrations
|
|
- Node.js toolchain for frontend testing
|
|
- **Network**: All containers communicate through `ci-cd-test-network`
|
|
- **Setup**: PiP container created, Docker Registry v2 login performed, code cloned into PiP from Forgejo
|
|
- **Cleanup**: Testing containers removed, DinD container kept running
|
|
|
|
**Job 2 (Building) - Direct Docker Commands:**
|
|
- **Purpose**: Image building and pushing to Docker Registry
|
|
- **Environment**: Same DinD container from Job 1
|
|
- **Code Access**: Reuses code from Job 1, updates to latest commit
|
|
- **Process**:
|
|
- Uses Docker Buildx for efficient building
|
|
- Builds backend and frontend images separately
|
|
- Pushes images to Docker Registry
|
|
- **Registry Access**: Reuses Docker Registry authentication from Job 1
|
|
- **Cleanup**: DinD container stopped and removed (clean slate for next run)
|
|
|
|
**Job 3 (Deployment) - `docker-compose.prod.yml`:**
|
|
- **Purpose**: Production deployment with pre-built images
|
|
- **Environment**: Production runner on Production Linode
|
|
- **Process**:
|
|
- Pulls images from Docker Registry
|
|
- Deploys complete application stack
|
|
- Verifies all services are healthy
|
|
- **Services**: PostgreSQL, backend, frontend, Nginx
|
|
|
|
**Key Benefits:**
|
|
- **🧹 Complete Isolation**: Each job has its own dedicated environment
|
|
- **🚫 No Resource Contention**: Testing and building don't interfere with Docker Registry
|
|
- **⚡ Consistent Environment**: Same setup every time
|
|
- **🎯 Purpose-Specific**: Each Docker Compose file serves a specific purpose
|
|
- **🔄 Parallel Safety**: Jobs can run safely in parallel
|
|
|
|
**Testing DinD Setup:**
|
|
|
|
```bash
|
|
# Test DinD functionality
|
|
docker exec ci-dind docker run --rm alpine:latest echo "DinD is working!"
|
|
|
|
# Test Docker Registry integration (using authenticated port for push)
|
|
docker exec ci-dind docker pull alpine:latest
|
|
docker exec ci-dind docker tag alpine:latest YOUR_CI_CD_IP:4443/APP_NAME/dind-test:latest
|
|
docker exec ci-dind docker push YOUR_CI_CD_IP:4443/APP_NAME/dind-test:latest
|
|
|
|
# Test unauthenticated pull
|
|
docker exec ci-dind docker pull YOUR_CI_CD_IP/APP_NAME/dind-test:latest
|
|
|
|
# Clean up test
|
|
docker exec ci-dind docker rmi YOUR_CI_CD_IP:4443/APP_NAME/dind-test:latest
|
|
docker exec ci-dind docker rmi YOUR_CI_CD_IP/APP_NAME/dind-test:latest
|
|
```
|
|
|
|
**Expected Output**:
|
|
- DinD container should be running and accessible
|
|
- Docker commands should work inside DinD
|
|
- Docker Registry push/pull should work from DinD
|
|
|
|
#### 6.5 Production Deployment Architecture
|
|
|
|
The production deployment uses a separate Docker Compose file (`docker-compose.prod.yml`) that pulls built images from the Docker Registry and deploys the complete application stack.
|
|
|
|
**Production Stack Components:**
|
|
- **PostgreSQL**: Production database with persistent storage
|
|
- **Backend**: Rust application built and pushed from CI/CD
|
|
- **Frontend**: Next.js application built and pushed from CI/CD
|
|
- **Nginx**: Reverse proxy with SSL termination
|
|
|
|
**Deployment Flow:**
|
|
1. **Production Runner**: Runs on Production Linode with `production` label
|
|
2. **Image Pull**: Pulls latest images from Docker Registry on CI Linode
|
|
3. **Stack Deployment**: Uses `docker-compose.prod.yml` to deploy complete stack
|
|
4. **Health Verification**: Ensures all services are healthy before completion
|
|
|
|
**Key Benefits:**
|
|
- **🔄 Image Registry**: Centralized image storage in Docker Registry
|
|
- **📦 Consistent Deployment**: Same images tested in CI are deployed to production
|
|
- **⚡ Fast Deployment**: Only pulls changed images
|
|
- **🛡️ Rollback Capability**: Can easily rollback to previous image versions
|
|
- **📊 Health Monitoring**: Built-in health checks for all services
|
|
|
|
#### 6.6 Monitoring Script
|
|
|
|
**Important**: The repository includes a pre-configured monitoring script in the `scripts/` directory that can be used for both CI/CD and production monitoring.
|
|
|
|
**Repository Script**:
|
|
- `scripts/monitor.sh` - Comprehensive monitoring script with support for both CI/CD and production environments
|
|
|
|
**To use the repository monitoring script**:
|
|
```bash
|
|
# The repository is already cloned at /opt/APP_NAME/
|
|
cd /opt/APP_NAME
|
|
|
|
# Make the script executable
|
|
chmod +x scripts/monitor.sh
|
|
|
|
# Test CI/CD monitoring
|
|
./scripts/monitor.sh --type ci-cd
|
|
|
|
# Test production monitoring (if you have a production setup)
|
|
./scripts/monitor.sh --type production
|
|
```
|
|
|
|
**Note**: The repository script is more comprehensive and includes proper error handling, colored output, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate monitoring information.
|
|
|
|
### Step 7: Configure Firewall
|
|
|
|
#### 7.1 Configure UFW Firewall
|
|
|
|
```bash
|
|
sudo ufw --force enable
|
|
sudo ufw default deny incoming
|
|
sudo ufw default allow outgoing
|
|
sudo ufw allow ssh
|
|
sudo ufw allow 443/tcp # Docker Registry via nginx (public read access)
|
|
```
|
|
|
|
**Security Model**:
|
|
- **Port 443 (Docker Registry)**: Public read access, authenticated write access
|
|
- **SSH**: Restricted to your IP addresses
|
|
- **All other ports**: Blocked
|
|
|
|
### Step 8: Test CI/CD Setup
|
|
|
|
#### 8.1 Test Podman Installation
|
|
|
|
```bash
|
|
podman --version
|
|
```
|
|
|
|
#### 8.2 Check Docker Registry v2 Status
|
|
|
|
```bash
|
|
cd /opt/APP_NAME/registry
|
|
podman pod ps
|
|
```
|
|
|
|
#### 8.3 Test Docker Registry v2 Access
|
|
|
|
```bash
|
|
# Test Docker Registry v2 API
|
|
curl -k https://localhost:443/v2/_catalog
|
|
|
|
# Test Docker Registry v2 UI
|
|
curl -k -I https://localhost:443
|
|
```
|
|
|
|
---
|
|
|
|
## Part 2: Production Linode Setup
|
|
|
|
### Step 10: Initial System Setup
|
|
|
|
#### 10.1 Update the System
|
|
|
|
```bash
|
|
sudo apt update && sudo apt upgrade -y
|
|
```
|
|
|
|
#### 10.2 Configure Timezone
|
|
|
|
```bash
|
|
# Configure timezone interactively
|
|
sudo dpkg-reconfigure tzdata
|
|
|
|
# Verify timezone setting
|
|
date
|
|
```
|
|
|
|
**What this does**: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).
|
|
|
|
**Expected output**: After selecting your timezone, the `date` command should show the current date and time in your selected timezone.
|
|
|
|
#### 10.3 Configure /etc/hosts
|
|
|
|
```bash
|
|
# Add localhost entries for both IPv4 and IPv6
|
|
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
|
|
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
|
|
echo "YOUR_PRODUCTION_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
|
|
echo "YOUR_PRODUCTION_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts
|
|
|
|
# Verify the configuration
|
|
cat /etc/hosts
|
|
```
|
|
|
|
**What this does**:
|
|
- Adds localhost entries for both IPv4 and IPv6 addresses to `/etc/hosts`
|
|
- Ensures proper localhost resolution for both IPv4 and IPv6
|
|
|
|
**Important**: Replace `YOUR_PRODUCTION_IPV4_ADDRESS` and `YOUR_PRODUCTION_IPV6_ADDRESS` with the actual IPv4 and IPv6 addresses of your Production Linode obtained from your Linode dashboard.
|
|
|
|
**Expected output**: The `/etc/hosts` file should show entries for `127.0.0.1`, `::1`, and your Linode's actual IP addresses all mapping to `localhost`.
|
|
|
|
#### 10.4 Install Essential Packages
|
|
|
|
```bash
|
|
sudo apt install -y \
|
|
curl \
|
|
wget \
|
|
git \
|
|
ca-certificates \
|
|
apt-transport-https \
|
|
software-properties-common \
|
|
ufw \
|
|
fail2ban \
|
|
htop \
|
|
nginx \
|
|
certbot \
|
|
python3-certbot-nginx
|
|
```
|
|
|
|
#### 10.5 Secure SSH Configuration
|
|
|
|
**Critical Security Step**: After setting up SSH key authentication, you must disable password authentication and root login to secure your Production server.
|
|
|
|
**Step 1: Edit SSH Configuration File**
|
|
|
|
```bash
|
|
# Open the SSH configuration file using nano
|
|
sudo nano /etc/ssh/sshd_config
|
|
```
|
|
|
|
**Step 2: Disallow Root Logins**
|
|
|
|
Find the line that says:
|
|
```
|
|
#PermitRootLogin prohibit-password
|
|
```
|
|
|
|
Change it to:
|
|
```
|
|
PermitRootLogin no
|
|
```
|
|
|
|
**Step 3: Disable Password Authentication**
|
|
|
|
Find the line that says:
|
|
```
|
|
#PasswordAuthentication yes
|
|
```
|
|
|
|
Change it to:
|
|
```
|
|
PasswordAuthentication no
|
|
```
|
|
|
|
**Step 4: Configure Protocol Family (Optional)**
|
|
|
|
If you only need IPv4 connections, find or add:
|
|
```
|
|
#AddressFamily any
|
|
```
|
|
|
|
Change it to:
|
|
```
|
|
AddressFamily inet
|
|
```
|
|
|
|
**Step 5: Save and Exit**
|
|
|
|
- Press `Ctrl + X` to exit
|
|
- Press `Y` to confirm saving
|
|
- Press `Enter` to confirm the filename
|
|
|
|
**Step 6: Test SSH Configuration**
|
|
|
|
```bash
|
|
# Test the SSH configuration for syntax errors
|
|
sudo sshd -t
|
|
```
|
|
|
|
**Step 7: Restart SSH Service**
|
|
|
|
For Ubuntu 22.10+ (socket-based activation):
|
|
```bash
|
|
sudo systemctl enable --now ssh.service
|
|
```
|
|
|
|
For other distributions:
|
|
```bash
|
|
sudo systemctl restart sshd
|
|
```
|
|
|
|
**Step 8: Verify SSH Access**
|
|
|
|
**IMPORTANT**: Test SSH access from a new terminal window before closing your current session:
|
|
|
|
```bash
|
|
# Test Production Linode
|
|
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "SSH configuration test successful"'
|
|
```
|
|
|
|
**What these changes do:**
|
|
|
|
- **`PermitRootLogin no`**: Completely disables root SSH access
|
|
- **`PasswordAuthentication no`**: Disables password-based authentication
|
|
- **`AddressFamily inet`**: Listens only on IPv4 (optional, for additional security)
|
|
|
|
**Security Benefits:**
|
|
|
|
- **No root access**: Eliminates the most common attack vector
|
|
- **Key-only authentication**: Prevents brute force password attacks
|
|
- **Protocol restriction**: Limits SSH to IPv4 only (if configured)
|
|
|
|
**Emergency Access:**
|
|
|
|
If you lose SSH access, you can still access the server through:
|
|
- **Linode Console**: Use the Linode dashboard's console access
|
|
- **Emergency mode**: Boot into single-user mode if needed
|
|
|
|
**Verification Commands:**
|
|
|
|
```bash
|
|
# Check SSH configuration
|
|
sudo grep -E "(PermitRootLogin|PasswordAuthentication|AddressFamily)" /etc/ssh/sshd_config
|
|
|
|
# Check SSH service status
|
|
sudo systemctl status ssh
|
|
|
|
# Check SSH logs for any issues
|
|
sudo journalctl -u ssh -f
|
|
|
|
# Test SSH access from a new session
|
|
ssh PROD_DEPLOY_USER@YOUR_PRODUCTION_IP 'whoami'
|
|
```
|
|
|
|
**Expected Output:**
|
|
- `PermitRootLogin no`
|
|
- `PasswordAuthentication no`
|
|
- `AddressFamily inet` (if configured)
|
|
- SSH service should be "active (running)"
|
|
- Test commands should return the deployment user name
|
|
|
|
**Important Security Notes:**
|
|
|
|
1. **Test before closing**: Always test SSH access from a new session before closing your current SSH connection
|
|
2. **Keep backup**: You can restore the original configuration if needed
|
|
3. **Monitor logs**: Check `/var/log/auth.log` for SSH activity and potential attacks
|
|
4. **Regular updates**: Keep SSH and system packages updated for security patches
|
|
|
|
**Alternative: Manual Configuration with Backup**
|
|
|
|
If you prefer to manually edit the file with a backup:
|
|
|
|
```bash
|
|
# Create backup
|
|
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.backup
|
|
|
|
# Edit the file
|
|
sudo nano /etc/ssh/sshd_config
|
|
|
|
# Test configuration
|
|
sudo sshd -t
|
|
|
|
# Restart service
|
|
sudo systemctl restart ssh
|
|
```
|
|
|
|
### Step 11: Create Users
|
|
|
|
#### 11.1 Create the PROD_SERVICE_USER User
|
|
|
|
```bash
|
|
# Create dedicated group for the production service account
|
|
sudo groupadd -r PROD_SERVICE_USER
|
|
|
|
# Create production service account user with dedicated group
|
|
sudo useradd -r -g PROD_SERVICE_USER -s /bin/bash -m -d /home/PROD_SERVICE_USER PROD_SERVICE_USER
|
|
echo "PROD_SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd
|
|
```
|
|
|
|
#### 11.2 Verify Users
|
|
|
|
```bash
|
|
sudo su - PROD_SERVICE_USER
|
|
whoami
|
|
pwd
|
|
exit
|
|
|
|
sudo su - PROD_DEPLOY_USER
|
|
whoami
|
|
pwd
|
|
exit
|
|
```
|
|
|
|
### Step 12: Install Podman
|
|
|
|
#### 12.1 Install Podman
|
|
|
|
```bash
|
|
# Install Podman and related tools
|
|
sudo apt install -y podman
|
|
|
|
# Verify installation
|
|
podman --version
|
|
```
|
|
|
|
#### 12.2 Configure Podman for Production Service Account
|
|
|
|
```bash
|
|
# Podman runs rootless by default, no group membership needed
|
|
# The subuid/subgid ranges configured in Step 1.5 enable rootless operation
|
|
```
|
|
|
|
#### 12.4 Create Application Directory
|
|
|
|
```bash
|
|
# Create application directory for deployment
|
|
sudo mkdir -p /opt/APP_NAME
|
|
sudo chown PROD_SERVICE_USER:PROD_SERVICE_USER /opt/APP_NAME
|
|
sudo chmod 755 /opt/APP_NAME
|
|
|
|
# Verify the directory was created correctly
|
|
ls -la /opt/APP_NAME
|
|
```
|
|
|
|
**What this does**:
|
|
- Creates the application directory that will be used for deployment
|
|
- Sets proper ownership for the PROD_SERVICE_USER
|
|
- Ensures the directory exists before the CI workflow runs
|
|
|
|
### Step 13: Configure Podman for Docker Registry v2 Access
|
|
|
|
**Important**: The Production Linode needs to be able to pull images from the Docker Registry v2 on the CI/CD Linode. Since we're using nginx with automatic HTTPS, no additional certificate configuration is needed.
|
|
|
|
```bash
|
|
# Change to the PROD_SERVICE_USER
|
|
sudo su - PROD_SERVICE_USER
|
|
|
|
# Test that Podman can pull images from the Docker Registry v2 (unauthenticated port 443)
|
|
podman pull YOUR_CI_CD_IP/APP_NAME/test:latest
|
|
|
|
# If the pull succeeds, the Docker Registry v2 is accessible for production deployments
|
|
|
|
# Change back to PROD_DEPLOY_USER
|
|
exit
|
|
```
|
|
|
|
**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address.
|
|
|
|
**Note**: Production deployments use unauthenticated pulls from port 443, while CI/CD operations use authenticated pushes to port 4443.
|
|
|
|
**What this does**:
|
|
- **Tests Docker Registry v2 access**: Verifies that Podman can successfully pull images from the Docker Registry v2
|
|
- **No certificate configuration needed**: nginx handles HTTPS automatically
|
|
- **Simple setup**: No complex certificate management required
|
|
|
|
### Step 14: Set Up Forgejo Runner for Production Deployment
|
|
|
|
**Important**: The Production Linode needs a Forgejo runner to execute the deployment job from the CI/CD workflow. This runner will pull images from Docker Registry and deploy using `docker-compose.prod.yml`.
|
|
|
|
#### 14.1 Download Runner
|
|
|
|
**Important**: Run this step as the **PROD_DEPLOY_USER** (not root or PROD_SERVICE_USER). The PROD_DEPLOY_USER handles deployment tasks including downloading and installing the Forgejo runner.
|
|
|
|
```bash
|
|
cd ~
|
|
|
|
# Get the latest version dynamically
|
|
LATEST_VERSION=$(curl -s https://code.forgejo.org/api/v1/repos/forgejo/runner/releases | jq -r '.[0].tag_name')
|
|
echo "Downloading Forgejo runner version: $LATEST_VERSION"
|
|
|
|
# Download the latest runner
|
|
wget https://code.forgejo.org/forgejo/runner/releases/download/${LATEST_VERSION}/forgejo-runner-${LATEST_VERSION#v}-linux-amd64
|
|
chmod +x forgejo-runner-${LATEST_VERSION#v}-linux-amd64
|
|
sudo mv forgejo-runner-${LATEST_VERSION#v}-linux-amd64 /usr/bin/forgejo-runner
|
|
```
|
|
|
|
**Alternative: Pin to Specific Version (Recommended for Production)**
|
|
|
|
If you prefer to pin to a specific version for stability, replace the dynamic download with:
|
|
|
|
```bash
|
|
cd ~
|
|
VERSION="v6.3.1" # Pin to specific version
|
|
wget https://code.forgejo.org/forgejo/runner/releases/download/${VERSION}/forgejo-runner-${VERSION#v}-linux-amd64
|
|
chmod +x forgejo-runner-${VERSION#v}-linux-amd64
|
|
sudo mv forgejo-runner-${VERSION#v}-linux-amd64 /usr/bin/forgejo-runner
|
|
```
|
|
|
|
**What this does**:
|
|
- **Dynamic approach**: Downloads the latest stable Forgejo Actions runner
|
|
- **Version pinning**: Allows you to specify a known-good version for production
|
|
- **System installation**: Installs the binary system-wide in `/usr/bin/` for proper Linux structure
|
|
- **Makes the binary executable** and available system-wide
|
|
|
|
**Production Recommendation**: Use version pinning in production environments to ensure consistency and avoid unexpected breaking changes.
|
|
|
|
#### 14.2 Get Registration Token
|
|
|
|
1. Go to your Forgejo repository
|
|
2. Navigate to **Settings → Actions → Runners**
|
|
3. Click **"New runner"**
|
|
4. Copy the registration token
|
|
|
|
#### 14.3 Register the Production Runner
|
|
|
|
**Step 1: Register the Runner**
|
|
|
|
```bash
|
|
# Switch to PROD_DEPLOY_USER to register the runner
|
|
sudo su - PROD_DEPLOY_USER
|
|
|
|
cd ~
|
|
|
|
# Register the runner with your Forgejo instance
|
|
forgejo-runner register \
|
|
--instance https://your-forgejo-instance \
|
|
--token YOUR_REGISTRATION_TOKEN \
|
|
--name "prod-runner" \
|
|
--labels "prod" \
|
|
--no-interactive
|
|
```
|
|
|
|
**Important**: Replace `your-forgejo-instance` with your actual Forgejo instance URL and `YOUR_REGISTRATION_TOKEN` with the token you copied from Step 14.2. Also make sure it ends in a `/`.
|
|
|
|
**Note**: The `your-forgejo-instance` should be the **base URL** of your Forgejo instance (e.g., `https://git.<your-domain>/`), not the full path to the repository. The runner registration process will handle connecting to the specific repository based on the token you provide.
|
|
|
|
**What this does**:
|
|
- Creates the required `.runner` configuration file in the PROD_DEPLOY_USER's home directory
|
|
- Registers the runner with your Forgejo instance
|
|
- Sets up the runner with appropriate labels for production deployment
|
|
|
|
**Step 2: Set Up System Configuration**
|
|
|
|
```bash
|
|
# Create system config directory for Forgejo runner
|
|
sudo mkdir -p /etc/forgejo-runner
|
|
|
|
# Copy the runner configuration to system location
|
|
sudo mv /home/PROD_DEPLOY_USER/.runner /etc/forgejo-runner/.runner
|
|
|
|
# Set proper ownership and permissions
|
|
sudo chown PROD_SERVICE_USER:PROD_SERVICE_USER /etc/forgejo-runner/.runner
|
|
sudo chmod 600 /etc/forgejo-runner/.runner
|
|
```
|
|
|
|
**What this does**:
|
|
- Copies the configuration to the system location (`/etc/forgejo-runner/.runner`)
|
|
- Sets proper ownership and permissions for PROD_SERVICE_USER to access the config
|
|
- Registers the runner with your Forgejo instance
|
|
- Sets up the runner with appropriate labels for production deployment
|
|
|
|
#### 14.4 Create Systemd Service
|
|
|
|
```bash
|
|
# Create systemd service file
|
|
sudo tee /etc/systemd/system/forgejo-runner.service > /dev/null << 'EOF'
|
|
[Unit]
|
|
Description=Forgejo Actions Runner (Production)
|
|
After=network.target docker.service
|
|
|
|
[Service]
|
|
Type=simple
|
|
User=PROD_SERVICE_USER
|
|
WorkingDirectory=/etc/forgejo-runner
|
|
ExecStart=/usr/bin/forgejo-runner daemon
|
|
Restart=always
|
|
RestartSec=10
|
|
|
|
[Install]
|
|
WantedBy=multi-user.target
|
|
EOF
|
|
|
|
# Enable and start the service
|
|
sudo systemctl daemon-reload
|
|
sudo systemctl enable forgejo-runner.service
|
|
sudo systemctl start forgejo-runner.service
|
|
|
|
# Verify the service is running
|
|
sudo systemctl status forgejo-runner.service
|
|
```
|
|
|
|
#### 14.5 Test Runner Configuration
|
|
|
|
```bash
|
|
# Check if the runner is running
|
|
sudo systemctl status forgejo-runner.service
|
|
|
|
# Check runner logs
|
|
sudo journalctl -u forgejo-runner.service -f --no-pager
|
|
|
|
# Verify runner appears in Forgejo
|
|
# Go to your Forgejo repository → Settings → Actions → Runners
|
|
# You should see your runner listed as "prod-runner" with status "Online"
|
|
```
|
|
|
|
**Expected Output**:
|
|
- `systemctl status` should show "active (running)"
|
|
- Forgejo web interface should show the runner as online with "prod" label
|
|
|
|
**Important**: The CI/CD workflow (`.forgejo/workflows/ci.yml`) is already configured to use this production runner. The deploy job runs on `runs-on: [self-hosted, prod]`, which means it will execute on any runner with the "prod" label.
|
|
|
|
**Architecture**:
|
|
- **Runner Configuration**: Located in `/etc/forgejo-runner/.runner` (system configuration)
|
|
- **Application Deployment**: Located in `/opt/APP_NAME/` (application software)
|
|
- **Workflow Process**: Runner starts in `/etc/forgejo-runner`, then checks out directly to `/opt/APP_NAME`
|
|
|
|
When the workflow runs, it will:
|
|
|
|
1. Pull the latest Docker images from Docker Registry
|
|
2. Use the `docker-compose.prod.yml` file to deploy the application stack
|
|
3. Create the necessary environment variables for production deployment
|
|
4. Verify that all services are healthy after deployment
|
|
|
|
The production runner will automatically handle the deployment process when you push to the main branch.
|
|
|
|
#### 14.6 Understanding the Production Docker Compose Setup
|
|
|
|
The `docker-compose.prod.yml` file is specifically designed for production deployment and differs from development setups:
|
|
|
|
**Key Features**:
|
|
- **Image-based deployment**: Uses pre-built images from Docker Registry instead of building from source
|
|
- **Production networking**: All services communicate through a dedicated `sharenet-network`
|
|
- **Health checks**: Each service includes health checks to ensure proper startup order
|
|
- **Nginx reverse proxy**: Includes Nginx for SSL termination, load balancing, and security headers
|
|
- **Persistent storage**: PostgreSQL data is stored in a named volume for persistence
|
|
- **Environment variables**: Uses environment variables for configuration (set by the CI/CD workflow)
|
|
|
|
**Service Architecture**:
|
|
1. **PostgreSQL**: Database with health checks and persistent storage
|
|
2. **Backend**: Rust API service that waits for PostgreSQL to be healthy
|
|
3. **Frontend**: Next.js application that waits for backend to be healthy
|
|
4. **Nginx**: Reverse proxy that serves the frontend and proxies API requests to backend
|
|
|
|
**Deployment Process**:
|
|
1. The production runner pulls the latest images from Docker Registry
|
|
2. Creates environment variables for the deployment
|
|
3. Runs `docker compose -f docker-compose.prod.yml up -d`
|
|
4. Waits for all services to be healthy
|
|
5. Verifies the deployment was successful
|
|
|
|
### Step 15: Configure Security
|
|
|
|
#### 15.1 Configure Firewall
|
|
|
|
```bash
|
|
sudo ufw --force enable
|
|
sudo ufw default deny incoming
|
|
sudo ufw default allow outgoing
|
|
sudo ufw allow ssh
|
|
sudo ufw allow 80/tcp
|
|
sudo ufw allow 443/tcp
|
|
```
|
|
|
|
**Security Note**: We only allow ports 80 and 443 for external access. The application services (backend on 3001, frontend on 3000) are only accessible through the Nginx reverse proxy, which provides better security and SSL termination.
|
|
|
|
#### 15.2 Configure Fail2ban
|
|
|
|
**Fail2ban** is an intrusion prevention system that monitors logs and automatically blocks IP addresses showing malicious behavior.
|
|
|
|
```bash
|
|
# Install fail2ban (if not already installed)
|
|
sudo apt install -y fail2ban
|
|
|
|
# Create a custom jail configuration
|
|
sudo tee /etc/fail2ban/jail.local > /dev/null << 'EOF'
|
|
[DEFAULT]
|
|
# Ban time in seconds (24 hours)
|
|
bantime = 86400
|
|
# Find time in seconds (10 minutes)
|
|
findtime = 600
|
|
# Max retries before ban
|
|
maxretry = 3
|
|
# Ban action (use ufw since we're using ufw firewall)
|
|
banaction = ufw
|
|
# Log level
|
|
loglevel = INFO
|
|
# Log target
|
|
logtarget = /var/log/fail2ban.log
|
|
|
|
# SSH protection
|
|
[sshd]
|
|
enabled = true
|
|
port = ssh
|
|
filter = sshd
|
|
logpath = /var/log/auth.log
|
|
maxretry = 3
|
|
|
|
# Note: Nginx protection is handled by the firewall and application-level security
|
|
# Docker containers are isolated, and Nginx logs are not directly accessible to fail2ban
|
|
# Web attack protection is provided by:
|
|
# 1. UFW firewall (ports 80/443 only)
|
|
# 2. Nginx security headers and rate limiting
|
|
# 3. Application-level input validation
|
|
EOF
|
|
|
|
# Enable and start fail2ban
|
|
sudo systemctl enable fail2ban
|
|
sudo systemctl start fail2ban
|
|
|
|
# Verify fail2ban is running
|
|
sudo systemctl status fail2ban
|
|
|
|
# Check current jails
|
|
sudo fail2ban-client status
|
|
```
|
|
|
|
**What this does**:
|
|
- **SSH Protection**: Blocks IPs that fail SSH login 3 times in 10 minutes
|
|
- **24-hour bans**: Banned IPs are blocked for 24 hours
|
|
- **Automatic monitoring**: Continuously watches SSH logs
|
|
|
|
**Web Security Note**: Since Nginx runs in a Docker container, web attack protection is handled by:
|
|
- **UFW Firewall**: Only allows ports 80/443 (no direct access to app services)
|
|
- **Nginx Security**: Built-in rate limiting and security headers
|
|
- **Application Security**: Input validation in the backend/frontend code
|
|
|
|
**Monitoring Fail2ban**:
|
|
```bash
|
|
# Check banned IPs
|
|
sudo fail2ban-client status sshd
|
|
|
|
# Unban an IP if needed
|
|
sudo fail2ban-client set sshd unbanip IP_ADDRESS
|
|
|
|
# View fail2ban logs
|
|
sudo tail -f /var/log/fail2ban.log
|
|
|
|
# Check all active jails
|
|
sudo fail2ban-client status
|
|
|
|
**Why This Matters for Production**:
|
|
- **Your server is exposed**: The Production Linode is accessible from the internet
|
|
- **Automated attacks**: Bots constantly scan for vulnerable servers
|
|
- **Resource protection**: Prevents attackers from consuming CPU/memory
|
|
- **Security layers**: Works with the firewall to provide defense in depth
|
|
|
|
### Step 16: Test Production Setup
|
|
|
|
#### 16.1 Test Podman Installation
|
|
|
|
```bash
|
|
podman --version
|
|
```
|
|
|
|
#### 16.2 Test Docker Registry v2 Access
|
|
|
|
```bash
|
|
# Test pulling an image from the CI/CD Docker Registry v2 (unauthenticated port 443)
|
|
podman pull YOUR_CI_CD_IP/APP_NAME/test:latest
|
|
```
|
|
|
|
**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address.
|
|
|
|
**Note**: Production uses unauthenticated pulls from the standard HTTPS port (443) for deployment operations.
|
|
|
|
**Note**: Application deployment testing will be done in Step 19 after the complete CI/CD pipeline is set up.
|
|
|
|
---
|
|
|
|
## Part 3: Final Configuration and Testing
|
|
|
|
### Step 17: Configure Forgejo Repository Secrets
|
|
|
|
Go to your Forgejo repository and add these secrets in **Settings → Secrets and Variables → Actions**:
|
|
|
|
**Required Secrets:**
|
|
- `CI_HOST`: Your CI/CD Linode IP address (used for Docker Registry access)
|
|
- `PRODUCTION_IP`: Your Production Linode IP address
|
|
- `PROD_DEPLOY_USER`: The production deployment user name (e.g., `prod-deploy`)
|
|
- `PROD_SERVICE_USER`: The production service user name (e.g., `prod-service`)
|
|
- `APP_NAME`: Your application name (e.g., `sharenet`)
|
|
- `POSTGRES_PASSWORD`: A strong password for the PostgreSQL database
|
|
- `REGISTRY_USER`: Docker Registry v2 username for CI operations (e.g., `registry-user`)
|
|
- `REGISTRY_PASSWORD`: Docker Registry v2 password for CI operations (the password you set in the nginx configuration, default: `your-secure-registry-password`)
|
|
- `REGISTRY_PUSH_URL`: Docker Registry v2 URL for authenticated pushes (e.g., `YOUR_CI_CD_IP:4443`)
|
|
- `REGISTRY_PULL_URL`: Docker Registry v2 URL for unauthenticated pulls (e.g., `YOUR_CI_CD_IP`)
|
|
|
|
|
|
|
|
**Note**: This setup uses custom Dockerfiles for testing environments with base images stored in Docker Registry. The CI pipeline automatically checks if base images exist in Docker Registry and pulls them from Docker Hub only when needed, eliminating rate limiting issues and providing better control over the testing environment.
|
|
|
|
### Step 18: Test Complete Pipeline
|
|
|
|
#### 18.1 Trigger a Test Build
|
|
|
|
1. **Make a small change** to your repository (e.g., update a comment or add a test file)
|
|
2. **Commit and push** the changes to trigger the CI/CD pipeline
|
|
3. **Monitor the build** in your Forgejo repository → Actions tab
|
|
|
|
#### 18.2 Verify Pipeline Steps
|
|
|
|
The pipeline should execute these steps in order:
|
|
|
|
1. **Checkout**: Clone the repository
|
|
2. **Setup DinD**: Configure Docker-in-Docker environment
|
|
3. **Test Backend**: Run backend tests in isolated environment
|
|
4. **Test Frontend**: Run frontend tests in isolated environment
|
|
5. **Build Backend**: Build backend Docker image in DinD
|
|
6. **Build Frontend**: Build frontend Docker image in DinD
|
|
7. **Push to Registry**: Push images to Docker Registry from DinD
|
|
8. **Deploy to Production**: Deploy to production server
|
|
|
|
#### 18.3 Check Docker Registry v2
|
|
|
|
```bash
|
|
# On CI/CD Linode
|
|
cd /opt/APP_NAME
|
|
|
|
# Check if new images were pushed (using unauthenticated port 443)
|
|
curl -k https://localhost:443/v2/_catalog
|
|
|
|
# Check specific repository tags
|
|
curl -k https://localhost:443/v2/APP_NAME/backend/tags/list
|
|
curl -k https://localhost:443/v2/APP_NAME/frontend/tags/list
|
|
|
|
# Alternative: Check registry via public endpoint
|
|
curl -k https://YOUR_CI_CD_IP/v2/_catalog
|
|
|
|
# Check authenticated endpoint (should require authentication)
|
|
curl -k https://YOUR_CI_CD_IP:4443/v2/_catalog
|
|
# Expected: This should return authentication error without credentials
|
|
```
|
|
|
|
#### 18.4 Verify Production Deployment
|
|
|
|
```bash
|
|
# On Production Linode
|
|
cd /opt/APP_NAME
|
|
|
|
# Check if pods are running with new images
|
|
podman pod ps
|
|
|
|
# Check application health
|
|
curl http://localhost:3000
|
|
curl http://localhost:3001/health
|
|
|
|
# Check container logs for any errors
|
|
podman logs sharenet-production-pod-backend
|
|
podman logs sharenet-production-pod-frontend
|
|
```
|
|
|
|
#### 18.5 Test Application Functionality
|
|
|
|
1. **Frontend**: Visit your production URL (IP address)
|
|
2. **Backend API**: Test API endpoints
|
|
3. **Database**: Verify database connections
|
|
4. **Logs**: Check for any errors in application logs
|
|
|
|
### Step 19: Final Verification
|
|
|
|
#### 19.1 Security Check
|
|
|
|
```bash
|
|
# Check firewall status
|
|
sudo ufw status
|
|
|
|
# Check fail2ban status
|
|
sudo systemctl status fail2ban
|
|
|
|
# Check SSH access (should be key-based only)
|
|
sudo grep "PasswordAuthentication" /etc/ssh/sshd_config
|
|
```
|
|
|
|
#### 19.2 Performance Check
|
|
|
|
```bash
|
|
# Check system resources
|
|
htop
|
|
|
|
# Check disk usage
|
|
df -h
|
|
|
|
# Check Docker disk usage
|
|
docker system df
|
|
```
|
|
|
|
#### 19.3 Backup Verification
|
|
|
|
```bash
|
|
# Test backup script
|
|
cd /opt/APP_NAME
|
|
./scripts/backup.sh --dry-run
|
|
|
|
# Run actual backup
|
|
./scripts/backup.sh
|
|
```
|
|
|
|
### Step 20: Documentation and Maintenance
|
|
|
|
#### 20.1 Update Documentation
|
|
|
|
1. **Update README.md** with deployment information
|
|
2. **Document environment variables** and their purposes
|
|
3. **Create troubleshooting guide** for common issues
|
|
4. **Document backup and restore procedures**
|
|
|
|
#### 20.2 Set Up Monitoring Alerts
|
|
|
|
```bash
|
|
# Set up monitoring cron job
|
|
(crontab -l 2>/dev/null; echo "*/5 * * * * cd /opt/APP_NAME && ./scripts/monitor.sh --type production >> /tmp/monitor.log 2>&1") | crontab -
|
|
|
|
# Check monitoring logs
|
|
tail -f /tmp/monitor.log
|
|
```
|
|
|
|
#### 20.3 Regular Maintenance Tasks
|
|
|
|
**Daily:**
|
|
- Check application logs for errors
|
|
- Monitor system resources
|
|
- Verify backup completion
|
|
|
|
**Weekly:**
|
|
- Review security logs
|
|
- Update system packages
|
|
- Test backup restoration
|
|
|
|
**Monthly:**
|
|
- Review and rotate logs
|
|
- Review and update documentation
|
|
|
|
---
|
|
|
|
## 🎉 Congratulations!
|
|
|
|
You have successfully set up a complete CI/CD pipeline with:
|
|
|
|
- ✅ **Automated testing** on every code push in isolated DinD environment
|
|
- ✅ **Docker image building** and Docker Registry storage
|
|
- ✅ **Automated deployment** to production
|
|
- ✅ **Health monitoring** and logging
|
|
- ✅ **Backup and cleanup** automation
|
|
- ✅ **Security hardening** with proper user separation
|
|
- ✅ **SSL/TLS support** with self-signed certificates
|
|
- ✅ **Zero resource contention** between CI/CD and Docker Registry
|
|
- ✅ **FHS-compliant directory structure** for better organization and security
|
|
|
|
Your application is now ready for continuous deployment with proper security, monitoring, and maintenance procedures in place!
|
|
|
|
### Cleanup Installation Files
|
|
|
|
After successful setup, you can clean up the installation files to remove sensitive information:
|
|
|
|
```bash
|
|
# Remove installation files (optional - for security)
|
|
sudo rm -rf /opt/APP_NAME/registry/openssl.conf
|
|
sudo rm -rf /opt/APP_NAME/registry/certs/requests/openssl.conf
|
|
|
|
# Note: DO NOT remove these files as they are needed for operation:
|
|
# - /opt/APP_NAME/registry/registry-pod.yaml
|
|
# - /opt/APP_NAME/registry/nginx.conf
|
|
# - /opt/APP_NAME/registry/containers-policy.json
|
|
# - /opt/APP_NAME/registry/docker-registry.service
|
|
# - /etc/registry/auth/.htpasswd (contains the actual secrets)
|
|
# - /etc/systemd/system/docker-registry.service
|
|
```
|
|
|
|
**Security Note**: The `.htpasswd` file in `/etc/registry/auth/.htpasswd` contains sensitive authentication data and should be:
|
|
- **Backed up securely** if needed for disaster recovery
|
|
- **Never committed to version control**
|
|
- **Protected with proper permissions** (600 - owner read/write only)
|
|
- **Rotated regularly** by updating the password and regenerating the htpasswd file
|
|
|
|
### Step 7.4 CI/CD Workflow Summary Table
|
|
|
|
| Stage | What Runs | How/Where |
|
|
|---------|--------------------------|--------------------------|
|
|
| Test | All integration/unit tests| `docker-compose.test.yml`|
|
|
| Build | Build & push images | Direct Docker commands |
|
|
| Deploy | Deploy to production | `docker-compose.prod.yml`|
|
|
|
|
**How it works:**
|
|
- **Test:** The workflow spins up a full test environment using `docker-compose.test.yml` (Postgres, backend, frontend, etc.) and runs all tests inside containers.
|
|
- **Build:** If tests pass, the workflow uses direct Docker commands (no compose file) to build backend and frontend images and push them to Docker Registry.
|
|
- **Deploy:** The production runner pulls images from Docker Registry and deploys the stack using `docker-compose.prod.yml`.
|
|
|
|
**Expected Output:**
|
|
- Each stage runs in its own isolated environment.
|
|
- Test failures stop the pipeline before any images are built or deployed.
|
|
- Only tested images are deployed to production.
|
|
|
|
### Manual Testing with docker-compose.test.yml
|
|
|
|
You can use the same test environment locally that the CI pipeline uses for integration testing. This is useful for debugging, development, or verifying your setup before pushing changes.
|
|
|
|
**Note**: Since the CI pipeline runs tests inside a DinD container, local testing requires a similar setup.
|
|
|
|
#### Start the Test Environment (Local Development)
|
|
|
|
For local development testing, you can run the test environment directly:
|
|
|
|
```bash
|
|
# Start the test environment locally
|
|
docker compose -f docker-compose.test.yml up -d
|
|
|
|
# Check service health
|
|
docker compose -f docker-compose.test.yml ps
|
|
```
|
|
|
|
**Important**: This local setup is for development only. The CI pipeline uses a more isolated DinD environment.
|
|
|
|
#### Run Tests Manually
|
|
|
|
You can now exec into the containers to run tests or commands as needed. For example:
|
|
```bash
|
|
# Run backend tests
|
|
docker exec ci-cd-test-rust cargo test --all
|
|
|
|
# Run frontend tests
|
|
docker exec ci-cd-test-node npm run test
|
|
```
|
|
|
|
#### Cleanup
|
|
When you're done, stop and remove all test containers:
|
|
```bash
|
|
docker compose -f docker-compose.test.yml down
|
|
```
|
|
|
|
**Tip:** The CI pipeline uses the same test containers but runs them inside a DinD environment for complete isolation. |