# CI/CD Pipeline Setup Guide
This guide covers setting up a complete Continuous Integration/Continuous Deployment (CI/CD) pipeline with a CI/CD Linode and a Production Linode for automated builds, testing, and deployments.
## Architecture Overview
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  Forgejo Host   │     │  CI/CD Linode   │     │Production Linode│
│  (Repository)   │     │ (Actions Runner)│     │ (Docker Deploy) │
│                 │     │+ Docker Registry│     │                 │
└────────┬────────┘     └────────┬────────┘     └────────┬────────┘
         │                       │                       │
         └──────── Push ─────────┤                       │
                                 └──────── Deploy ───────┘
```
## Pipeline Flow
1. **Code Push**: Developer pushes code to Forgejo repository
2. **Automated Testing**: CI/CD Linode runs tests on backend and frontend
3. **Image Building**: If tests pass, Docker images are built
4. **Registry Push**: Images are pushed to private registry on CI/CD Linode
5. **Production Deployment**: Production Linode pulls images and deploys
6. **Health Check**: Application is verified and accessible
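The six stages above map naturally onto jobs in a Forgejo Actions workflow. The skeleton below is a sketch only — job names, branch, and commands are illustrative placeholders, not this repository's actual workflow file — written as a heredoc in the same style this guide uses for other config files:

```shell
# Sketch: a skeleton .forgejo/workflows/ci.yml mirroring the pipeline stages.
# Run from a working copy of the repository; the echo commands stand in for
# real test/build/deploy steps.
mkdir -p .forgejo/workflows
cat > .forgejo/workflows/ci.yml << 'EOF'
name: CI/CD Pipeline
on:
  push:
    branches: [main]
jobs:
  test-backend:
    runs-on: docker
    steps:
      - run: echo "run backend tests"
  test-frontend:
    runs-on: docker
    steps:
      - run: echo "run frontend tests"
  build-and-push:
    needs: [test-backend, test-frontend]
    runs-on: docker
    steps:
      - run: echo "docker build, then docker push to the private registry"
  deploy:
    needs: build-and-push
    runs-on: docker
    steps:
      - run: echo "ssh to production, then docker compose pull && docker compose up -d"
EOF
grep 'needs:' .forgejo/workflows/ci.yml
```

The `needs:` keys encode the ordering above: images are only built if both test jobs pass, and deployment only runs after a successful push to the registry.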
## Prerequisites
- Two Ubuntu 24.04 LTS Linodes with root access
- Basic familiarity with Linux commands and SSH
- Forgejo repository with Actions enabled
- **Optional**: Domain name for Production Linode (for SSL/TLS)
## Quick Start
1. **Set up CI/CD Linode** (Steps 1-13)
2. **Set up Production Linode** (Steps 14-26)
3. **Configure SSH key exchange** (Step 27)
4. **Set up Forgejo repository secrets** (Step 28)
5. **Test the complete pipeline** (Step 29)
## What's Included
### CI/CD Linode Features
- Forgejo Actions runner for automated builds
- Local Docker registry for image storage
- Registry web UI for image management
- Automated cleanup of old images
- Secure SSH communication with production
### Production Linode Features
- Docker-based application deployment
- **Optional SSL/TLS certificate management** (if domain is provided)
- Nginx reverse proxy with security headers
- Automated backups and monitoring
- Firewall and fail2ban protection
### Pipeline Features
- **Automated testing** on every code push
- **Automated image building** and registry push
- **Automated deployment** to production
- **Rollback capability** with image versioning
- **Health monitoring** and logging
## Security Model and User Separation
This setup uses a **principle of least privilege** approach with separate users for different purposes:
### User Roles
1. **Root User**
- **Purpose**: Initial system setup only
- **SSH Access**: Disabled after setup
- **Privileges**: Full system access (used only during initial configuration)
2. **Deployment User (`DEPLOY_USER`)**
- **Purpose**: SSH access, deployment tasks, system administration
- **SSH Access**: Enabled with key-based authentication
- **Privileges**: Sudo access for deployment and administrative tasks
- **Examples**: `deploy`, `ci`, `admin`
3. **Service Account (`SERVICE_USER`)**
- **Purpose**: Running application services (Docker containers, databases)
- **SSH Access**: None (no login shell)
- **Privileges**: No sudo access, minimal system access
- **Examples**: `appuser`, `service`, `app`
### Security Benefits
- **No root SSH access**: Eliminates the most common attack vector
- **Principle of least privilege**: Each user has only the access they need
- **Separation of concerns**: Deployment tasks vs. service execution are separate
- **Audit trail**: Clear distinction between deployment and service activities
- **Reduced attack surface**: Service account has minimal privileges
### File Permissions
- **Application files**: Owned by `SERVICE_USER` for security
- **Docker operations**: Run by `DEPLOY_USER` with sudo (deployment only)
- **Service execution**: Run by `SERVICE_USER` (no sudo needed)
---
## Prerequisites and Initial Setup
### What's Already Done (Assumptions)
This guide assumes you have already:
1. **Created two Ubuntu 24.04 LTS Linodes** with root access
2. **Set root passwords** for both Linodes
3. **Have SSH client** installed on your local machine
4. **Have Forgejo repository** with Actions enabled
5. **Optional**: Domain name pointing to the Production Linode's IP address
### Step 0: Initial SSH Access and Verification
Before proceeding with the setup, you need to establish initial SSH access to both Linodes.
#### 0.1 Get Your Linode IP Addresses
From your Linode dashboard, note the IP addresses for:
- **CI/CD Linode**: `YOUR_CI_CD_IP` (IP address only, no domain needed)
- **Production Linode**: `YOUR_PRODUCTION_IP` (IP address for SSH, domain for web access)
#### 0.2 Test Initial SSH Access
Test SSH access to both Linodes:
```bash
# Test CI/CD Linode (IP address only)
ssh root@YOUR_CI_CD_IP
# Test Production Linode (IP address only)
ssh root@YOUR_PRODUCTION_IP
```
**Expected output**: SSH login prompt asking for root password.
**If something goes wrong**:
- Verify the IP addresses are correct
- Check that SSH is enabled on the Linodes
- Ensure your local machine can reach the Linodes (no firewall blocking)
#### 0.3 Choose Your Names
Before proceeding, decide on:
1. **Service Account Name**: Choose a username for the service account (e.g., `appuser`, `deploy`, `service`)
- Replace `SERVICE_USER` in this guide with your chosen name
- This account runs the actual application services
2. **Deployment User Name**: Choose a username for deployment tasks (e.g., `deploy`, `ci`, `admin`)
- Replace `DEPLOY_USER` in this guide with your chosen name
- This account has sudo privileges for deployment tasks
3. **Application Name**: Choose a name for your application (e.g., `myapp`, `webapp`, `api`)
- Replace `APP_NAME` in this guide with your chosen name
4. **Domain Name** (Optional): If you have a domain, note it for SSL configuration
- Replace `your-domain.com` in this guide with your actual domain
**Example**:
- If you choose `appuser` as service account, `deploy` as deployment user, and `myapp` as application name:
- Replace all `SERVICE_USER` with `appuser`
- Replace all `DEPLOY_USER` with `deploy`
- Replace all `APP_NAME` with `myapp`
- If you have a domain `example.com`, replace `your-domain.com` with `example.com`
**Security Model**:
- **Service Account (`SERVICE_USER`)**: Runs application services, no sudo access
- **Deployment User (`DEPLOY_USER`)**: Handles deployments via SSH, has sudo access
- **Root**: Only used for initial setup, then disabled for SSH access
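To reduce find-and-replace mistakes, you can set your chosen names as shell variables once per terminal session and splice them into commands as you follow along. The values below are just the example names from this section:

```shell
# Example names only; substitute your own choices
SERVICE_USER=appuser
DEPLOY_USER=deploy
APP_NAME=myapp
# Commands from this guide can then reference the variables, e.g.:
echo "sudo useradd -m -s /bin/bash $DEPLOY_USER"
echo "docker push YOUR_CI_CD_IP:5000/$APP_NAME/backend:latest"
```

Variables only persist for the current shell session, so re-export them (or put them in `~/.bashrc`) on each server where you run the guide's commands.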
#### 0.4 Set Up SSH Key Authentication for Local Development
**Important**: This step should be done on both Linodes to enable secure SSH access from your local development machine.
##### 0.4.1 Generate SSH Key on Your Local Machine
On your local development machine, generate an SSH key pair:
```bash
# Generate SSH key pair (if you don't already have one)
ssh-keygen -t ed25519 -C "your-email@example.com" -f ~/.ssh/id_ed25519 -N ""
# Or use existing key if you have one
ls ~/.ssh/id_ed25519.pub
```
##### 0.4.2 Add Your Public Key to Both Linodes
Copy your public key to both Linodes:
```bash
# Copy your public key to CI/CD Linode
ssh-copy-id root@YOUR_CI_CD_IP
# Copy your public key to Production Linode
ssh-copy-id root@YOUR_PRODUCTION_IP
```
**Alternative method** (if ssh-copy-id doesn't work):
```bash
# Copy your public key content
cat ~/.ssh/id_ed25519.pub
# Then manually add to each server
ssh root@YOUR_CI_CD_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys
ssh root@YOUR_PRODUCTION_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys
```
##### 0.4.3 Test SSH Key Authentication
Test that you can access both servers without passwords:
```bash
# Test CI/CD Linode
ssh root@YOUR_CI_CD_IP 'echo "SSH key authentication works for CI/CD"'
# Test Production Linode
ssh root@YOUR_PRODUCTION_IP 'echo "SSH key authentication works for Production"'
```
**Expected output**: The echo messages should appear without password prompts.
##### 0.4.4 Create Deployment Users
On both Linodes, create the deployment user with sudo privileges:
```bash
# Create deployment user
sudo useradd -m -s /bin/bash DEPLOY_USER
sudo usermod -aG sudo DEPLOY_USER
# Set a secure password (for emergency access only)
echo "DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd
# Copy your SSH key to the deployment user
sudo mkdir -p /home/DEPLOY_USER/.ssh
sudo cp ~/.ssh/authorized_keys /home/DEPLOY_USER/.ssh/
sudo chown -R DEPLOY_USER:DEPLOY_USER /home/DEPLOY_USER/.ssh
sudo chmod 700 /home/DEPLOY_USER/.ssh
sudo chmod 600 /home/DEPLOY_USER/.ssh/authorized_keys
# Allow passwordless sudo for unattended deployment tasks
echo "DEPLOY_USER ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/DEPLOY_USER
sudo chmod 440 /etc/sudoers.d/DEPLOY_USER
```
**Security Note**: This configuration allows the DEPLOY_USER to use sudo without a password, which is more secure for CI/CD automation since there are no passwords to store or expose. The random password is set for emergency console access only.
##### 0.4.5 Test Sudo Access
Test that the deployment user can use sudo without password prompts:
```bash
# Test sudo access
ssh DEPLOY_USER@YOUR_CI_CD_IP 'sudo whoami'
ssh DEPLOY_USER@YOUR_PRODUCTION_IP 'sudo whoami'
```
**Expected output**: Both commands should return `root` without prompting for a password.
##### 0.4.6 Test Deployment User Access
Test that you can access both servers as the deployment user:
```bash
# Test CI/CD Linode
ssh DEPLOY_USER@YOUR_CI_CD_IP 'echo "Deployment user SSH access works for CI/CD"'
# Test Production Linode
ssh DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "Deployment user SSH access works for Production"'
```
**Expected output**: The echo messages should appear without password prompts.
##### 0.4.7 Create SSH Config for Easy Access
On your local machine, create an SSH config file for easy access:
```bash
# Create SSH config
cat > ~/.ssh/config << 'EOF'
Host ci-cd-dev
    HostName YOUR_CI_CD_IP
    User DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no

Host production-dev
    HostName YOUR_PRODUCTION_IP
    User DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no
EOF
chmod 600 ~/.ssh/config
```
Now you can access servers easily:
```bash
ssh ci-cd-dev
ssh production-dev
```
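You can check how OpenSSH resolves an alias without connecting by using `ssh -G`, which prints the effective options for a host. A self-contained sketch against a throwaway config file (`203.0.113.10` is a documentation placeholder, not a real server):

```shell
# Write a demo config and ask ssh which options the alias resolves to.
cat > /tmp/demo_ssh_config << 'EOF'
Host ci-cd-dev
    HostName 203.0.113.10
    User deploy
    IdentityFile ~/.ssh/id_ed25519
EOF
# -G prints resolved options without opening a connection; -F points at our file
ssh -G -F /tmp/demo_ssh_config ci-cd-dev | grep -E '^(hostname|user) '
```

Run the same check against `~/.ssh/config` (omit `-F`) after Step 0.4.7 to confirm your real aliases resolve to the intended host and user.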
---
## Part 1: CI/CD Linode Setup
### Step 1: Initial System Setup
#### 1.1 Update the System
```bash
sudo apt update && sudo apt upgrade -y
```
**What this does**: Updates package lists and upgrades all installed packages to their latest versions.
**Expected output**: A list of packages being updated, followed by completion messages.
#### 1.2 Configure Timezone
```bash
# Configure timezone interactively
sudo dpkg-reconfigure tzdata
# Verify timezone setting
date
```
**What this does**: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).
**Expected output**: After selecting your timezone, the `date` command should show the current date and time in your selected timezone.
#### 1.3 Configure /etc/hosts
```bash
# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts
# Verify the configuration
cat /etc/hosts
```
**What this does**:
- Adds localhost entries for both IPv4 and IPv6 addresses to `/etc/hosts`
- Ensures proper localhost resolution for both IPv4 and IPv6
**Important**: Replace `YOUR_CI_CD_IPV4_ADDRESS` and `YOUR_CI_CD_IPV6_ADDRESS` with the actual IPv4 and IPv6 addresses of your CI/CD Linode obtained from your Linode dashboard.
**Expected output**: The `/etc/hosts` file should show entries for `127.0.0.1`, `::1`, and your Linode's actual IP addresses all mapping to `localhost`.
#### 1.4 Install Essential Packages
```bash
sudo apt install -y \
curl \
wget \
git \
build-essential \
pkg-config \
libssl-dev \
ca-certificates \
apt-transport-https \
software-properties-common \
apache2-utils
```
**What this does**: Installs development tools, SSL libraries, and utilities needed for Docker and application building.
### Step 2: Create Users
#### 2.1 Create Service Account
```bash
# Create dedicated group for the service account
sudo groupadd -r SERVICE_USER
# Create service account user with dedicated group
sudo useradd -r -g SERVICE_USER -s /bin/bash -m -d /home/SERVICE_USER SERVICE_USER
echo "SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd
```
#### 2.2 Verify Users
```bash
sudo su - SERVICE_USER
whoami
pwd
exit
sudo su - DEPLOY_USER
whoami
pwd
exit
```
### Step 3: Install Docker
#### 3.1 Add Docker Repository
```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
```
#### 3.2 Install Docker Packages
```bash
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
```
#### 3.3 Configure Docker for Service Account
```bash
sudo usermod -aG docker SERVICE_USER
```
### Step 4: Set Up Docker Registry
#### 4.1 Create Registry Directory
```bash
sudo mkdir -p /opt/registry
sudo chown SERVICE_USER:SERVICE_USER /opt/registry
```
#### 4.2 Create Registry Configuration
```bash
# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER
cat > /opt/registry/config.yml << 'EOF'
version: 0.1
log:
  level: info
storage:
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
  cache:
    blobdescriptor: inmemory
http:
  addr: :5000
  tls:
    certificate: /etc/docker/registry/ssl/registry.crt
    key: /etc/docker/registry/ssl/registry.key
  headers:
    X-Content-Type-Options: [nosniff]
    X-Frame-Options: [DENY]
    X-XSS-Protection: [1; mode=block]
    Access-Control-Allow-Origin: ["*"]
    Access-Control-Allow-Methods: ["HEAD", "GET", "OPTIONS", "DELETE"]
    Access-Control-Allow-Headers: ["Authorization", "Content-Type", "Accept", "Accept-Encoding", "Accept-Language", "Cache-Control", "Connection", "DNT", "Pragma", "User-Agent"]
# Public read access, authentication required for push
auth:
  htpasswd:
    realm: basic-realm
    path: /etc/docker/registry/auth/auth.htpasswd
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
EOF
# Exit SERVICE_USER shell
exit
```
**What this configuration does:**
- **HTTPS Enabled**: Uses TLS certificates for secure communication
- **Public Read Access**: Anyone can pull images without authentication
- **Authenticated Push**: Only authenticated users can push images
- **Security Headers**: Protects against common web vulnerabilities
- **CORS Headers**: Allows the registry UI to access the registry API with all necessary headers
- **Minimal Configuration**: No registry `secret` is set; it is not needed for a single-node registry
**Security Note**: We switch to SERVICE_USER because the registry directory is owned by SERVICE_USER, maintaining proper file ownership and security.
#### 4.2.1 Generate SSL Certificates
```bash
# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER
# Create SSL directory
mkdir -p /opt/registry/ssl
# Generate self-signed certificate
openssl req -x509 -newkey rsa:4096 -keyout /opt/registry/ssl/registry.key -out /opt/registry/ssl/registry.crt -days 365 -nodes -subj "/C=US/ST=State/L=City/O=Organization/CN=YOUR_CI_CD_IP"
# Set proper permissions
chmod 600 /opt/registry/ssl/registry.key
chmod 644 /opt/registry/ssl/registry.crt
# Exit SERVICE_USER shell
exit
```
**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address in the certificate generation command.
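Before relying on a certificate, it is worth confirming its subject and expiry with `openssl x509`. The sketch below generates a throwaway certificate in `/tmp` so it is self-contained; on the CI/CD Linode you would point the inspection command at `/opt/registry/ssl/registry.crt` instead:

```shell
# Generate a throwaway self-signed cert (placeholder CN, not your real IP)
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 365 -nodes -subj "/CN=203.0.113.10" 2>/dev/null
# Print the subject and the notAfter (expiry) date
openssl x509 -in /tmp/demo.crt -noout -subject -enddate
```

The `CN` in the subject must match the address clients use to reach the registry, which is why the generation command above uses your Linode's IP.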
#### 4.3 Create Authentication File
```bash
# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER
# Create htpasswd file for authentication (required for push operations only)
mkdir -p /opt/registry/auth
# Generate a random password, keep a copy for the Docker login in Step 5,
# and store only its bcrypt hash in the htpasswd file
PUSH_PASSWORD=$(openssl rand -base64 32)
printf '%s' "$PUSH_PASSWORD" > /opt/registry/auth/push-password
chmod 600 /opt/registry/auth/push-password
htpasswd -Bbn push-user "$PUSH_PASSWORD" > /opt/registry/auth/auth.htpasswd
# Exit SERVICE_USER shell
exit
```
**What this does**: Creates user credentials for registry authentication. The password is also saved to `/opt/registry/auth/push-password` (mode 600), because the htpasswd file stores only a bcrypt hash, from which the password cannot be recovered later.
- `push-user`: Can push images (used by CI/CD pipeline for deployments)
**Note**: Pull operations are public and don't require authentication, but push operations require these credentials.
#### 4.4 Create Docker Compose for Registry
```bash
# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER
cat > /opt/registry/docker-compose.yml << 'EOF'
version: '3.8'

services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    volumes:
      - ./config.yml:/etc/docker/registry/config.yml:ro
      - ./auth/auth.htpasswd:/etc/docker/registry/auth/auth.htpasswd:ro
      - ./ssl/registry.crt:/etc/docker/registry/ssl/registry.crt:ro
      - ./ssl/registry.key:/etc/docker/registry/ssl/registry.key:ro
      - registry_data:/var/lib/registry
    restart: unless-stopped
    networks:
      - registry_network
    healthcheck:
      test: ["CMD", "curl", "-k", "-f", "https://localhost:5000/v2/_catalog"]
      interval: 30s
      timeout: 10s
      retries: 3

  registry-ui:
    image: joxit/docker-registry-ui:latest
    expose:
      - "80"
    environment:
      - REGISTRY_TITLE=APP_NAME Registry
      - REGISTRY_URL=https://YOUR_CI_CD_IP:8080
    depends_on:
      registry:
        condition: service_healthy
    restart: unless-stopped
    networks:
      - registry_network

  nginx:
    image: nginx:alpine
    ports:
      - "8080:443"
    volumes:
      - ./ssl:/etc/nginx/ssl:ro
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - registry-ui
    restart: unless-stopped
    networks:
      - registry_network

volumes:
  registry_data:

networks:
  registry_network:
    driver: bridge
EOF
# Exit SERVICE_USER shell
exit
```
**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address in the `REGISTRY_URL` environment variable.
**Note**: The volume mounts map each certificate file individually to the location the registry container expects.
#### 4.4.1 Create Nginx Configuration
```bash
# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER
cat > /opt/registry/nginx.conf << 'EOF'
events {
    worker_connections 1024;
}

http {
    upstream registry_ui {
        server registry-ui:80;
    }

    upstream registry_api {
        server registry:5000;
    }

    server {
        listen 443 ssl;
        server_name YOUR_CI_CD_IP;

        ssl_certificate /etc/nginx/ssl/registry.crt;
        ssl_certificate_key /etc/nginx/ssl/registry.key;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;

        # Proxy registry API requests
        location /v2/ {
            proxy_pass http://registry_api;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_connect_timeout 30s;
            proxy_send_timeout 30s;
            proxy_read_timeout 30s;
        }

        # Proxy registry UI requests
        location / {
            proxy_pass http://registry_ui;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_connect_timeout 30s;
            proxy_send_timeout 30s;
            proxy_read_timeout 30s;
        }
    }
}
EOF
# Exit SERVICE_USER shell
exit
```
**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address in the nginx configuration.
#### 4.5 Install Required Tools
```bash
# Install htpasswd utility
sudo apt install -y apache2-utils
```
#### 4.6 Start Registry
```bash
# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER
cd /opt/registry
docker compose up -d
# Exit SERVICE_USER shell
exit
```
#### 4.6.1 Restart Registry with Updated Configuration
If you've already started the registry and then updated the `REGISTRY_URL` in the docker-compose.yml file, you need to restart the containers for the changes to take effect:
```bash
# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER
cd /opt/registry
# Stop and remove the existing containers
docker compose down
# Start the containers with the updated configuration
docker compose up -d
# Exit SERVICE_USER shell
exit
```
**Note**: This step is only needed if you've already started the registry and then updated the `REGISTRY_URL`. If you're starting fresh, Step 4.6 is sufficient.
#### 4.6.2 Troubleshoot Connection Issues
If you get "Unable to Connect" when accessing `https://YOUR_CI_CD_IP:8080`, run these diagnostic commands:
```bash
# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER
cd /opt/registry
# Check if all containers are running
docker compose ps
# Check container logs for errors
docker compose logs nginx
docker compose logs registry-ui
docker compose logs registry
# Check if nginx is listening on port 8080
netstat -tlnp | grep :8080
# Test nginx directly
curl -k https://localhost:8080
# Exit SERVICE_USER shell
exit
```
**Common Issues and Solutions:**
- **Container not running**: Run `docker compose up -d` to start containers
- **Port conflict**: Check if port 8080 is already in use
- **SSL certificate issues**: Verify the certificate files exist and have correct permissions
- **Firewall blocking**: Ensure port 8080 is open in your firewall
#### 4.6.3 Fix Container Restart Issues
If containers are restarting repeatedly, check the logs and fix the configuration:
```bash
# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER
cd /opt/registry
# Stop all containers
docker compose down
# Check if SSL certificates exist
ls -la ssl/
# If certificates don't exist, generate them
if [ ! -f ssl/registry.crt ]; then
    echo "Generating SSL certificates..."
    mkdir -p ssl
    openssl req -x509 -newkey rsa:4096 -keyout ssl/registry.key -out ssl/registry.crt -days 365 -nodes -subj "/C=US/ST=State/L=City/O=Organization/CN=YOUR_CI_CD_IP"
    chmod 600 ssl/registry.key
    chmod 644 ssl/registry.crt
fi
# Check if nginx.conf exists
ls -la nginx.conf
# If nginx.conf doesn't exist, create it
if [ ! -f nginx.conf ]; then
    echo "Creating nginx configuration..."
    cat > nginx.conf << 'EOF'
events {
    worker_connections 1024;
}

http {
    upstream registry_ui {
        server registry-ui:80;
    }

    upstream registry_api {
        server registry:5000;
    }

    server {
        listen 443 ssl;
        server_name YOUR_CI_CD_IP;

        ssl_certificate /etc/nginx/ssl/registry.crt;
        ssl_certificate_key /etc/nginx/ssl/registry.key;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;

        # Proxy registry API requests
        location /v2/ {
            proxy_pass http://registry_api;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # Proxy registry UI requests
        location / {
            proxy_pass http://registry_ui;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
EOF
fi
# Replace YOUR_CI_CD_IP with actual IP in nginx.conf
sed -i "s/YOUR_CI_CD_IP/YOUR_ACTUAL_CI_CD_IP/g" nginx.conf
# Start containers and check logs
docker compose up -d
# Wait a moment, then check logs
sleep 5
docker compose logs nginx
docker compose logs registry
# Exit SERVICE_USER shell
exit
```
**Important**: Replace `YOUR_ACTUAL_CI_CD_IP` with your actual CI/CD Linode IP address in the command above.
#### 4.7 Test Registry Setup
```bash
# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER
# Check if containers are running
cd /opt/registry
docker compose ps
# Test registry API (HTTPS via nginx)
curl -k https://localhost:8080/v2/_catalog
# Test registry UI (HTTPS via nginx)
curl -I https://localhost:8080
# Test Docker push/pull (optional but recommended)
# Create a test image
echo "FROM alpine:latest" > /tmp/test.Dockerfile
echo "RUN echo 'Hello from test image'" >> /tmp/test.Dockerfile
# Build and tag test image
docker build -f /tmp/test.Dockerfile -t localhost:8080/test:latest /tmp
# Push to registry
docker push localhost:8080/test:latest
# Verify image is in registry
curl -k https://localhost:8080/v2/_catalog
curl -k https://localhost:8080/v2/test/tags/list
# Pull image back (verifies pull works)
docker rmi localhost:8080/test:latest
docker pull localhost:8080/test:latest
# Clean up test image completely
# Remove from local Docker
docker rmi localhost:8080/test:latest
# Clean up test file
rm /tmp/test.Dockerfile
# Clean up test repository using registry UI
# 1. Open your browser and go to: https://YOUR_CI_CD_IP:8080
# 2. You should see the 'test' repository listed
# 3. Click on the 'test' repository
# 4. Click the delete button (trash icon) next to the 'latest' tag
# 5. Confirm the deletion
# 6. The test repository should now be removed
# Verify registry is empty
curl -k https://localhost:8080/v2/_catalog
# Exit SERVICE_USER shell
exit
```
**Important Notes:**
- **Registry API**: Uses HTTPS on port 5000 (secure)
- **Registry UI**: Uses HTTPS on port 8080 (secure, via nginx reverse proxy)
- **Access URLs**:
- Registry UI: `https://YOUR_CI_CD_IP:8080` (use HTTPS)
- Registry API: `https://YOUR_CI_CD_IP:5000`
- **Browser Access**: Both services now use HTTPS for secure communication
**Expected Output**:
- `docker compose ps` should show `registry`, `registry-ui`, and `nginx` as "Up"
- `curl -k https://localhost:8080/v2/_catalog` should return `{"repositories":[]}` (empty initially)
- `curl -I https://localhost:8080` should return HTTP 200
- The push/pull test should complete successfully
**If something goes wrong**:
- Check container logs: `docker compose logs`
- Verify ports are open: `netstat -tlnp | grep :5000`
- Check Docker daemon config: `cat /etc/docker/daemon.json`
- Restart registry: `docker compose restart`
### Step 5: Configure Docker for Registry Access
#### 5.1 Trust the Registry Certificate
```bash
# Copy the registry's self-signed certificate into the system trust store
sudo cp /opt/registry/ssl/registry.crt /usr/local/share/ca-certificates/registry.crt
sudo update-ca-certificates
```
#### 5.2 Restart Docker and Log In
```bash
# Restart Docker so the daemon picks up the newly trusted certificate
sudo systemctl restart docker
# Log in with the push-user password saved in Step 4.3. docker login stores
# the credentials in ~/.docker/config.json; the daemon's /etc/docker/daemon.json
# has no "auths" key, so credentials cannot be configured there.
PUSH_PASSWORD=$(sudo cat /opt/registry/auth/push-password)
echo "$PUSH_PASSWORD" | docker login YOUR_CI_CD_IP:8080 --username push-user --password-stdin
```
**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address. The htpasswd file contains only a bcrypt hash, so the password cannot be extracted from it; if you did not keep a copy when creating the file, regenerate `auth.htpasswd` with a new password and restart the registry before logging in.
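Registry credentials are stored as the base64 encoding of `user:password`. A quick round-trip shows the format (the password here is a made-up example, not a real credential):

```shell
# Encode user:password the way Docker stores it, then decode to confirm
AUTH=$(printf '%s' 'push-user:example-password' | base64)
echo "$AUTH"
printf '%s' "$AUTH" | base64 -d
```

Note that base64 is an encoding, not encryption: anyone who can read the stored string can recover the password, which is why credential files must be kept with restrictive permissions.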
### Public Registry Access Model
Your registry is now configured with the following access model:
#### **Public Read Access**
Anyone can pull images without authentication:
```bash
# From any machine (public access)
docker pull YOUR_CI_CD_IP:5000/APP_NAME/backend:latest
docker pull YOUR_CI_CD_IP:5000/APP_NAME/frontend:latest
```
#### **Authenticated Write Access**
Only the CI/CD Linode can push images (using credentials):
```bash
# From CI/CD Linode only (authenticated)
docker push YOUR_CI_CD_IP:5000/APP_NAME/backend:latest
docker push YOUR_CI_CD_IP:5000/APP_NAME/frontend:latest
```
#### **Registry UI Access**
Public web interface for browsing images:
```
https://YOUR_CI_CD_IP:8080
```
#### **Client Configuration**
For other machines to pull images, they only need:
```bash
# Add to /etc/docker/daemon.json on client machines
{
"insecure-registries": ["YOUR_CI_CD_IP:5000"]
}
# No authentication needed for pulls
```
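As an alternative to `insecure-registries`, Docker clients can trust the registry's self-signed certificate directly via Docker's documented `certs.d` layout: `/etc/docker/certs.d/<host:port>/ca.crt`. The sketch below demonstrates the layout under `/tmp` with a placeholder address and dummy file; on a real client you would use the real path with `sudo` and copy the actual `registry.crt`:

```shell
# Demonstration of the certs.d directory layout (placeholder values)
CERT_ROOT=/tmp/demo-certs.d            # real path: /etc/docker/certs.d
REGISTRY_HOST="203.0.113.10:5000"      # placeholder registry address
mkdir -p "$CERT_ROOT/$REGISTRY_HOST"
printf 'dummy certificate contents\n' > "$CERT_ROOT/$REGISTRY_HOST/ca.crt"
ls "$CERT_ROOT/$REGISTRY_HOST"
```

With the real certificate in place, pulls verify TLS properly and the `insecure-registries` entry is unnecessary.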
### Step 6: Set Up SSH for Production Communication
#### 6.1 Generate SSH Key Pair
```bash
ssh-keygen -t ed25519 -C "ci-cd-server" -f ~/.ssh/id_ed25519 -N ""
```
#### 6.2 Create SSH Config
```bash
cat > ~/.ssh/config << 'EOF'
Host production
    HostName YOUR_PRODUCTION_IP
    User DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF
chmod 600 ~/.ssh/config
```
### Step 7: Install Forgejo Actions Runner
#### 7.1 Download Runner
```bash
cd ~
wget https://code.forgejo.org/forgejo/runner/releases/download/v0.2.11/forgejo-runner-0.2.11-linux-amd64
chmod +x forgejo-runner-0.2.11-linux-amd64
sudo mv forgejo-runner-0.2.11-linux-amd64 /usr/local/bin/forgejo-runner
```
#### 7.2 Create Systemd Service
```bash
sudo tee /etc/systemd/system/forgejo-runner.service > /dev/null << 'EOF'
[Unit]
Description=Forgejo Actions Runner
After=network.target
[Service]
Type=simple
User=SERVICE_USER
WorkingDirectory=/home/SERVICE_USER
ExecStart=/usr/local/bin/forgejo-runner daemon
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
```
#### 7.3 Enable Service
```bash
sudo systemctl daemon-reload
sudo systemctl enable forgejo-runner.service
```
#### 7.4 Test Runner Configuration
```bash
# Check if the runner is running
sudo systemctl status forgejo-runner.service
# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager
# Test runner connectivity (in a separate terminal)
forgejo-runner list
# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "ci-cd-runner" with status "Online"
```
**Expected Output**:
- `systemctl status` should show "active (running)"
- `forgejo-runner list` should show your runner
- Forgejo web interface should show the runner as online
**If something goes wrong**:
- Check logs: `sudo journalctl -u forgejo-runner.service -f`
- Verify token: Make sure the registration token is correct
- Check network: Ensure the runner can reach your Forgejo instance
- Restart service: `sudo systemctl restart forgejo-runner.service`
### Step 8: Set Up Monitoring and Cleanup
#### 8.1 Monitoring Script
**Important**: The repository includes a pre-configured monitoring script in the `scripts/` directory that can be used for both CI/CD and production monitoring.
**Repository Script**:
- `scripts/monitor.sh` - Comprehensive monitoring script with support for both CI/CD and production environments
**To use the repository monitoring script**:
```bash
# Clone the repository if not already done
git clone https://your-forgejo-instance/your-username/APP_NAME.git /tmp/monitoring-setup
cd /tmp/monitoring-setup
# Make the script executable
chmod +x scripts/monitor.sh
# Test CI/CD monitoring
./scripts/monitor.sh --type ci-cd
# Test production monitoring (if you have a production setup)
./scripts/monitor.sh --type production
# Clean up
cd /
rm -rf /tmp/monitoring-setup
```
**Alternative: Create a local copy for convenience**:
```bash
# Copy the script to your home directory for easy access
cp /tmp/monitoring-setup/scripts/monitor.sh ~/monitor.sh
chmod +x ~/monitor.sh
# Test the local copy
~/monitor.sh --type ci-cd
```
**Note**: The repository script is more comprehensive and includes proper error handling, colored output, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate monitoring information.
#### 8.2 Cleanup Script
**Important**: The repository includes a pre-configured cleanup script in the `scripts/` directory that can be used for both CI/CD and production cleanup operations.
**Repository Script**:
- `scripts/cleanup.sh` - Comprehensive cleanup script with support for both CI/CD and production environments
**To use the repository cleanup script**:
```bash
# Clone the repository if not already done
git clone https://your-forgejo-instance/your-username/APP_NAME.git /tmp/cleanup-setup
cd /tmp/cleanup-setup
# Make the script executable
chmod +x scripts/cleanup.sh
# Test CI/CD cleanup (dry run first)
./scripts/cleanup.sh --type ci-cd --dry-run
# Run CI/CD cleanup
./scripts/cleanup.sh --type ci-cd
# Test production cleanup (dry run first)
./scripts/cleanup.sh --type production --dry-run
# Clean up
cd /
rm -rf /tmp/cleanup-setup
```
**Alternative: Create a local copy for convenience**:
```bash
# Copy the script to your home directory for easy access
cp /tmp/cleanup-setup/scripts/cleanup.sh ~/cleanup.sh
chmod +x ~/cleanup.sh
# Test the local copy (dry run)
~/cleanup.sh --type ci-cd --dry-run
```
**Note**: The repository script is more comprehensive and includes proper error handling, colored output, dry-run mode, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate cleanup operations.
#### 8.3 Test Cleanup Script
```bash
# Create some test images to clean up
docker pull alpine:latest
docker pull nginx:latest
docker tag alpine:latest test-cleanup:latest
docker tag nginx:latest test-cleanup2:latest
# Test cleanup with a dry run first (run from a repository clone such as
# /tmp/cleanup-setup, or substitute ~/cleanup.sh if you made a local copy)
./scripts/cleanup.sh --type ci-cd --dry-run
# Run the cleanup script
./scripts/cleanup.sh --type ci-cd
# Verify cleanup worked
echo "Checking remaining images:"
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"
echo "Checking remaining volumes:"
docker volume ls
echo "Checking remaining networks:"
docker network ls
```
**Expected Output**:
- Cleanup script should run without errors
- Test images should be removed
- System should report cleanup completion
- Remaining images should be minimal (only actively used ones)
**If something goes wrong**:
- Check script permissions: `ls -la scripts/cleanup.sh`
- Verify Docker access: `docker ps`
- Check registry access: `cd /opt/registry && docker compose ps`
- Run manually: `bash -x scripts/cleanup.sh`
#### 8.4 Set Up Automated Cleanup
```bash
# Create a cron job to run cleanup daily at 3 AM using the repository script
# Note: /tmp is cleared on reboot, so this entry only works while the
# /tmp/cleanup-setup clone exists; the local-copy alternative is more durable
(crontab -l 2>/dev/null; echo "0 3 * * * cd /tmp/cleanup-setup && ./scripts/cleanup.sh --type ci-cd >> /tmp/cleanup.log 2>&1") | crontab -
# Verify the cron job was added
crontab -l
```
**What this does:**
- **Runs automatically**: The cleanup script runs every day at 3:00 AM
- **Frequency**: Daily cleanup to prevent disk space issues
- **Logging**: All cleanup output is logged to `/tmp/cleanup.log`
- **What it cleans**: Unused Docker images, volumes, networks, and registry images
**Alternative: Use a local copy for automated cleanup**:
```bash
# If you created a local copy, use that instead
(crontab -l 2>/dev/null; echo "0 3 * * * $HOME/cleanup.sh --type ci-cd >> $HOME/cleanup.log 2>&1") | crontab -
```
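After installing either cron entry, you can confirm the job is registered and inspect its most recent output. A quick check (the log path assumes the repository-clone variant above):

```bash
# Confirm the cleanup cron job is installed and inspect its last run.
# Skips gracefully on systems without cron.
command -v crontab >/dev/null 2>&1 || { echo "crontab not found; skipping"; exit 0; }

# List any installed cleanup entries
crontab -l 2>/dev/null | grep cleanup || echo "no cleanup entry installed yet"

# Show the tail of the log once the job has run at least once
[ -f /tmp/cleanup.log ] && tail -n 20 /tmp/cleanup.log || echo "no cleanup.log yet"
```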
### Step 9: Configure Firewall
```bash
sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 5000/tcp # Docker registry (public read access)
sudo ufw allow 8080/tcp # Registry UI (public read access)
```
**Security Model**:
- **Port 5000 (Registry)**: Public read access, authenticated write access
- **Port 8080 (UI)**: Public read access for browsing images
- **SSH**: Allowed from anywhere by the rules above; for a stricter model, replace the rule with `sudo ufw allow from YOUR_IP to any port 22 proto tcp`
- **All other ports**: Blocked
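To confirm the active rules match this model, a quick verification (requires passwordless sudo; skips if `ufw` is unavailable):

```bash
# Show the active firewall rule set for comparison with the security model above
command -v ufw >/dev/null 2>&1 || { echo "ufw not found; skipping"; exit 0; }
sudo -n true 2>/dev/null || { echo "sudo requires a password here; run manually"; exit 0; }
sudo ufw status verbose
```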
### Step 10: Test CI/CD Setup
#### 10.1 Test Docker Installation
```bash
docker --version
docker compose --version
```
#### 10.2 Check Registry Status
```bash
cd /opt/registry
docker compose ps
```
#### 10.3 Test Registry Access
```bash
curl http://localhost:5000/v2/_catalog
```
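The catalog call above only exercises the read path. To also verify that pushes work, a sketch of a smoke test (assumes Docker and the registry container are running locally; the `registry-smoke-test` image name is just an example):

```bash
# Smoke-test the registry's write path with a tiny throwaway image.
# Skips gracefully if Docker or the registry is unavailable.
command -v docker >/dev/null 2>&1 || { echo "docker not found; skipping"; exit 0; }
curl -fs http://localhost:5000/v2/ >/dev/null 2>&1 || { echo "registry not reachable; skipping"; exit 0; }

docker pull alpine:latest
docker tag alpine:latest localhost:5000/registry-smoke-test:latest
docker push localhost:5000/registry-smoke-test:latest

# The new repository should now appear in the catalog
curl -s http://localhost:5000/v2/_catalog
```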
#### 10.4 Get Public Key for Production Server
```bash
cat ~/.ssh/id_ed25519.pub
```
**Important**: Copy this public key - you'll need it for the production server setup.
### Step 11: Configure Forgejo Actions Runner
#### 11.1 Get Runner Token
1. Go to your Forgejo repository
2. Navigate to Settings → Actions → Runners
3. Click "New runner"
4. Copy the registration token
#### 11.2 Configure Runner
```bash
# Get the registration token from your Forgejo repository
# Go to Settings → Actions → Runners → New runner
# Copy the registration token
# Configure the runner
forgejo-runner register \
--instance https://your-forgejo-instance \
--token YOUR_TOKEN \
--name "ci-cd-runner" \
--labels "ubuntu-latest,docker" \
--no-interactive
```
#### 11.3 Start Runner
```bash
sudo systemctl start forgejo-runner.service
sudo systemctl status forgejo-runner.service
```
#### 11.4 Test Runner Configuration
```bash
# Check if the runner is running
sudo systemctl status forgejo-runner.service
# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager
# Test runner connectivity (in a separate terminal)
forgejo-runner list
# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "ci-cd-runner" with status "Online"
```
**Expected Output**:
- `systemctl status` should show "active (running)"
- `forgejo-runner list` should show your runner
- Forgejo web interface should show the runner as online
**If something goes wrong**:
- Check logs: `sudo journalctl -u forgejo-runner.service -f`
- Verify token: Make sure the registration token is correct
- Check network: Ensure the runner can reach your Forgejo instance
- Restart service: `sudo systemctl restart forgejo-runner.service`
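The `start` command in Step 11.3 does not persist across reboots on its own. Assuming the unit is named `forgejo-runner.service` as above, enabling it makes the runner come back automatically:

```bash
# Make the runner start automatically at boot.
# Skips gracefully when systemd or the unit is absent.
command -v systemctl >/dev/null 2>&1 || { echo "systemctl not found; skipping"; exit 0; }
systemctl cat forgejo-runner.service >/dev/null 2>&1 || { echo "unit not installed; skipping"; exit 0; }
sudo -n true 2>/dev/null || { echo "sudo requires a password here; run manually"; exit 0; }

sudo systemctl enable forgejo-runner.service
systemctl is-enabled forgejo-runner.service
```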
---
## Part 2: Production Linode Setup
### Step 12: Initial System Setup
#### 12.1 Update the System
```bash
sudo apt update && sudo apt upgrade -y
```
#### 12.2 Configure Timezone
```bash
# Configure timezone interactively
sudo dpkg-reconfigure tzdata
# Verify timezone setting
date
```
**What this does**: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).
**Expected output**: After selecting your timezone, the `date` command should show the current date and time in your selected timezone.
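If you prefer a non-interactive setup (useful for scripted provisioning), `timedatectl` can set the zone directly on systemd-based systems; `UTC` below is only an example:

```bash
# Non-interactive timezone configuration on systemd-based systems.
# Replace UTC with your preferred zone, e.g. America/New_York.
command -v timedatectl >/dev/null 2>&1 || { echo "timedatectl not found; skipping"; exit 0; }
timedatectl status >/dev/null 2>&1 || { echo "systemd not running; skipping"; exit 0; }
sudo -n true 2>/dev/null || { echo "sudo requires a password here; run manually"; exit 0; }

sudo timedatectl set-timezone UTC
timedatectl show --property=Timezone
```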
#### 12.3 Configure /etc/hosts
```bash
# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts
# Verify the configuration
cat /etc/hosts
```
**What this does**:
- Adds localhost entries for both IPv4 and IPv6 addresses to `/etc/hosts`
- Ensures proper localhost resolution for both IPv4 and IPv6
**Important**: Replace `YOUR_PRODUCTION_IPV4_ADDRESS` and `YOUR_PRODUCTION_IPV6_ADDRESS` with the actual IPv4 and IPv6 addresses of your Production Linode obtained from your Linode dashboard.
**Expected output**: The `/etc/hosts` file should show entries for `127.0.0.1`, `::1`, and your Linode's actual IP addresses all mapping to `localhost`.
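You can also verify resolution through the system resolver rather than just reading the file:

```bash
# Resolve localhost through NSS (which consults /etc/hosts)
getent hosts localhost

# First IPv4 and IPv6 answers; expect 127.0.0.1 and, where IPv6 is enabled, ::1
getent ahostsv4 localhost | head -n 1
getent ahostsv6 localhost | head -n 1
```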
#### 12.4 Install Essential Packages
```bash
sudo apt install -y \
curl \
wget \
git \
ca-certificates \
apt-transport-https \
software-properties-common \
ufw \
fail2ban \
htop \
nginx \
certbot \
python3-certbot-nginx
```
### Step 13: Create Users
#### 13.1 Create the SERVICE_USER User
```bash
# Create dedicated group for the service account
sudo groupadd -r SERVICE_USER
# Create service account user with dedicated group
sudo useradd -r -g SERVICE_USER -s /bin/bash -m -d /home/SERVICE_USER SERVICE_USER
echo "SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd
```
#### 13.2 Verify Users
```bash
sudo su - SERVICE_USER
whoami
pwd
exit
# DEPLOY_USER is the deployment account; if it does not exist yet on this
# server, create it the same way as SERVICE_USER before running this check
sudo su - DEPLOY_USER
whoami
pwd
exit
```
### Step 14: Install Docker
#### 14.1 Add Docker Repository
```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
```
#### 14.2 Install Docker Packages
```bash
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
```
#### 14.3 Configure Docker for Service Account
```bash
sudo usermod -aG docker SERVICE_USER
```
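Group membership only takes effect on a fresh login, so verify from a new session. A sketch, assuming the `SERVICE_USER` account from Step 13 exists:

```bash
# Verify the service account can run Docker without sudo.
# Skips gracefully when Docker or the account is absent.
command -v docker >/dev/null 2>&1 || { echo "docker not found; skipping"; exit 0; }
id SERVICE_USER >/dev/null 2>&1 || { echo "SERVICE_USER not present; skipping"; exit 0; }

# A fresh login shell picks up the new docker group membership
sudo su - SERVICE_USER -c 'id -nG | tr " " "\n" | grep -x docker && docker ps'
```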
### Step 15: Install Docker Compose (Standalone, Optional)
The `docker-compose-plugin` installed in Step 14 already provides the `docker compose` command used throughout this guide. Install the standalone binary only if you also need the legacy `docker-compose` syntax:
```bash
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```
### Step 16: Configure Security
#### 16.1 Configure Firewall
```bash
sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 3000/tcp
sudo ufw allow 3001/tcp
```
#### 16.2 Configure Fail2ban
```bash
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
```
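To confirm fail2ban is actually protecting SSH (on Ubuntu the `sshd` jail is typically enabled by default):

```bash
# Check the fail2ban service and the SSH jail status.
# Skips gracefully when fail2ban is absent or not yet running.
command -v fail2ban-client >/dev/null 2>&1 || { echo "fail2ban not found; skipping"; exit 0; }
sudo -n true 2>/dev/null || { echo "sudo requires a password here; run manually"; exit 0; }
sudo systemctl is-active --quiet fail2ban || { echo "fail2ban not active; skipping"; exit 0; }

sudo fail2ban-client status sshd
```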
### Step 17: Create Application Directory
#### 17.1 Create Directory Structure
```bash
sudo mkdir -p /opt/APP_NAME
sudo chown SERVICE_USER:SERVICE_USER /opt/APP_NAME
```
**Note**: Replace `APP_NAME` with your actual application name. This directory name can be controlled via the `APP_NAME` secret in your Forgejo repository settings. If you set the `APP_NAME` secret to `myapp`, the deployment directory will be `/opt/myapp`.
#### 17.2 Create SSL Directory (Optional - for domain users)
```bash
sudo mkdir -p /opt/APP_NAME/nginx/ssl
sudo chown SERVICE_USER:SERVICE_USER /opt/APP_NAME/nginx/ssl
```
### Step 18: Clone Repository and Set Up Application Files
#### 18.1 Switch to SERVICE_USER User
```bash
sudo su - SERVICE_USER
```
#### 18.2 Clone Repository
```bash
cd /opt/APP_NAME
git clone https://your-forgejo-instance/your-username/APP_NAME.git .
```
**Important**: The repository includes a pre-configured `nginx/nginx.conf` file that handles both SSL and non-SSL scenarios, with proper security headers, rate limiting, and CORS configuration. This file will be automatically used by the Docker Compose setup.
**Important**: The repository also includes a pre-configured `.forgejo/workflows/ci.yml` file that handles the complete CI/CD pipeline including testing, building, and deployment. This workflow is already set up to work with the private registry and production deployment.
**Note**: Replace `your-forgejo-instance` and `your-username/APP_NAME` with your actual Forgejo instance URL and repository path.
#### 18.3 Create Environment File
The repository doesn't include a `.env.example` file for security reasons. The CI/CD pipeline will create the `.env` file dynamically during deployment. However, for manual testing or initial setup, you can create a basic `.env` file:
```bash
cat > /opt/APP_NAME/.env << 'EOF'
# Production Environment Variables
POSTGRES_PASSWORD=your_secure_password_here
REGISTRY=YOUR_CI_CD_IP:5000
IMAGE_NAME=APP_NAME
IMAGE_TAG=latest
# Database Configuration
POSTGRES_DB=sharenet
POSTGRES_USER=sharenet
DATABASE_URL=postgresql://sharenet:your_secure_password_here@postgres:5432/sharenet
# Application Configuration
NODE_ENV=production
RUST_LOG=info
RUST_BACKTRACE=1
EOF
```
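Since this file contains credentials, it should be readable only by its owner. The commands below demonstrate the permission change against a scratch file so they are safe to run anywhere; on the server, apply `chmod 600` to `/opt/APP_NAME/.env` itself:

```bash
# Restrict a credentials file to its owner (demonstrated on a scratch copy)
ENV_FILE=$(mktemp)                 # stands in for /opt/APP_NAME/.env
echo "POSTGRES_PASSWORD=example" > "$ENV_FILE"
chmod 600 "$ENV_FILE"
stat -c '%a %U' "$ENV_FILE"        # expect mode 600, owned by the current user
rm -f "$ENV_FILE"
```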
**Important**: Replace `YOUR_CI_CD_IP`