# CI/CD Pipeline Setup Guide

This guide covers setting up a complete Continuous Integration/Continuous Deployment (CI/CD) pipeline with a CI/CD Linode and Production Linode for automated builds, testing, and deployments.

## Architecture Overview

```
┌─────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│  Forgejo Host   │     │   CI/CD Linode   │     │ Production Linode│
│  (Repository)   │     │ (Actions Runner) │     │ (Docker Deploy)  │
│                 │     │ + Docker Registry│     │                  │
└─────────────────┘     └──────────────────┘     └──────────────────┘
         │                       │                        │
         └─────────── Push ──────┼────────────────────────┘
                                 │
                                 └─── Deploy ─────────────┘
```
## Pipeline Flow

1. **Code Push**: Developer pushes code to the Forgejo repository
2. **Automated Testing**: CI/CD Linode runs tests on backend and frontend
3. **Image Building**: If tests pass, Docker images are built
4. **Registry Push**: Images are pushed to the private registry on the CI/CD Linode
5. **Production Deployment**: Production Linode pulls images and deploys
6. **Health Check**: Application is verified and accessible
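The steps above map onto a single Forgejo Actions workflow file. The sketch below is illustrative only — the file path, job layout, script names, and image paths are assumptions to adapt to your repository (`YOUR_CI_CD_IP` and `APP_NAME` are the same placeholders used throughout this guide):

```yaml
# .forgejo/workflows/ci.yml — illustrative sketch, not a drop-in file
on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/test.sh          # hypothetical test entry point

  build-and-deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t YOUR_CI_CD_IP:5000/APP_NAME/backend:latest ./backend
      - run: docker push YOUR_CI_CD_IP:5000/APP_NAME/backend:latest
      - run: ssh production 'cd /opt/APP_NAME && docker compose pull && docker compose up -d'
```

The real workflow will differ (multiple images, secrets for credentials), but the test → build → push → deploy job ordering is the shape the rest of this guide builds toward.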
## Prerequisites

- Two Ubuntu 24.04 LTS Linodes with root access
- Basic familiarity with Linux commands and SSH
- Forgejo repository with Actions enabled
- **Optional**: Domain name for Production Linode (for SSL/TLS)

## Quick Start

1. **Set up CI/CD Linode** (Part 1, Steps 1-11)
2. **Set up Production Linode** (Part 2, Steps 12-26)
3. **Configure SSH key exchange** (Step 27)
4. **Set up Forgejo repository secrets** (Step 28)
5. **Test the complete pipeline** (Step 29)
## What's Included

### CI/CD Linode Features

- Forgejo Actions runner for automated builds
- Local Docker registry for image storage
- Registry web UI for image management
- Automated cleanup of old images
- Secure SSH communication with production

### Production Linode Features

- Docker-based application deployment
- **Optional SSL/TLS certificate management** (if domain is provided)
- Nginx reverse proxy with security headers
- Automated backups and monitoring
- Firewall and fail2ban protection

### Pipeline Features

- **Automated testing** on every code push
- **Automated image building** and registry push
- **Automated deployment** to production
- **Rollback capability** with image versioning
- **Health monitoring** and logging

## Security Model and User Separation

This setup uses a **principle of least privilege** approach with separate users for different purposes:

### User Roles

1. **Root User**
   - **Purpose**: Initial system setup only
   - **SSH Access**: Disabled after setup
   - **Privileges**: Full system access (used only during initial configuration)

2. **Deployment User (`DEPLOY_USER`)**
   - **Purpose**: SSH access, deployment tasks, system administration
   - **SSH Access**: Enabled with key-based authentication
   - **Privileges**: Sudo access for deployment and administrative tasks
   - **Examples**: `deploy`, `ci`, `admin`

3. **Service Account (`SERVICE_USER`)**
   - **Purpose**: Running application services (Docker containers, databases)
   - **SSH Access**: None (no SSH key installed)
   - **Privileges**: No sudo access, minimal system access
   - **Examples**: `appuser`, `service`, `app`

### Security Benefits

- **No root SSH access**: Eliminates the most common attack vector
- **Principle of least privilege**: Each user has only the access it needs
- **Separation of concerns**: Deployment tasks and service execution are kept separate
- **Audit trail**: Clear distinction between deployment and service activities
- **Reduced attack surface**: Service account has minimal privileges

### File Permissions

- **Application files**: Owned by `SERVICE_USER` for security
- **Docker operations**: Run by `DEPLOY_USER` with sudo (deployment only)
- **Service execution**: Run by `SERVICE_USER` (no sudo needed)
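The ownership rules above are easiest to apply with `install`, which creates a directory with owner, group, and mode in one step. A no-sudo demo of the mode-setting part (the path and mode here are arbitrary examples):

```shell
# Create a directory with an explicit mode in one step (demo in /tmp).
# In the real setup you would add -o SERVICE_USER -g SERVICE_USER and run under sudo.
install -d -m 750 /tmp/demo_app
stat -c '%a' /tmp/demo_app
# → 750
```

Using `install -d` avoids the separate `mkdir`/`chown`/`chmod` dance and is idempotent when re-run.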
---
## Prerequisites and Initial Setup

### What's Already Done (Assumptions)

This guide assumes you have already:

1. **Created two Ubuntu 24.04 LTS Linodes** with root access
2. **Set root passwords** for both Linodes
3. **Installed an SSH client** on your local machine
4. **Created a Forgejo repository** with Actions enabled
5. **Optional**: Domain name pointing to the Production Linode's IP addresses

### Step 0: Initial SSH Access and Verification

Before proceeding with the setup, you need to establish initial SSH access to both Linodes.

#### 0.1 Get Your Linode IP Addresses

From your Linode dashboard, note the IP addresses for:

- **CI/CD Linode**: `YOUR_CI_CD_IP` (IP address only, no domain needed)
- **Production Linode**: `YOUR_PRODUCTION_IP` (IP address for SSH, domain for web access)
#### 0.2 Test Initial SSH Access

Test SSH access to both Linodes:

```bash
# Test CI/CD Linode (IP address only)
ssh root@YOUR_CI_CD_IP

# Test Production Linode (IP address only)
ssh root@YOUR_PRODUCTION_IP
```

**Expected output**: SSH login prompt asking for the root password.

**If something goes wrong**:

- Verify the IP addresses are correct
- Check that SSH is enabled on the Linodes
- Ensure your local machine can reach the Linodes (no firewall blocking)
#### 0.3 Choose Your Names

Before proceeding, decide on:

1. **Service Account Name**: Choose a username for the service account (e.g., `appuser`, `service`, `app`)
   - Replace `SERVICE_USER` in this guide with your chosen name
   - This account runs the actual application services

2. **Deployment User Name**: Choose a username for deployment tasks (e.g., `deploy`, `ci`, `admin`)
   - Replace `DEPLOY_USER` in this guide with your chosen name
   - This account has sudo privileges for deployment tasks

3. **Application Name**: Choose a name for your application (e.g., `myapp`, `webapp`, `api`)
   - Replace `APP_NAME` in this guide with your chosen name

4. **Domain Name** (Optional): If you have a domain, note it for SSL configuration
   - Replace `your-domain.com` in this guide with your actual domain

**Example**:

- If you choose `appuser` as the service account, `deploy` as the deployment user, and `myapp` as the application name:
  - Replace all `SERVICE_USER` with `appuser`
  - Replace all `DEPLOY_USER` with `deploy`
  - Replace all `APP_NAME` with `myapp`
- If you have a domain `example.com`, replace `your-domain.com` with `example.com`

**Security Model**:

- **Service Account (`SERVICE_USER`)**: Runs application services, no sudo access
- **Deployment User (`DEPLOY_USER`)**: Handles deployments via SSH, has sudo access
- **Root**: Only used for initial setup, then disabled for SSH access
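Since the placeholders are plain text, you can substitute your chosen names mechanically in a local copy of this guide with `sed`. A demo on a two-line scratch file (the file name and the chosen names are the examples from above):

```shell
# Scratch file standing in for a local copy of this guide
printf 'ssh DEPLOY_USER@host\nAPP_NAME image\n' > /tmp/guide.md

# Substitute the example names chosen above
sed -e 's/SERVICE_USER/appuser/g' \
    -e 's/DEPLOY_USER/deploy/g' \
    -e 's/APP_NAME/myapp/g' /tmp/guide.md
# → ssh deploy@host
# → myapp image
```

Redirect the output to a new file (`> guide.local.md`) to keep a personalized copy.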
#### 0.4 Set Up SSH Key Authentication for Local Development

**Important**: This step should be done on both Linodes to enable secure SSH access from your local development machine.

##### 0.4.1 Generate SSH Key on Your Local Machine

On your local development machine, generate an SSH key pair:

```bash
# Generate an SSH key pair (if you don't already have one)
ssh-keygen -t ed25519 -C "your-email@example.com" -f ~/.ssh/id_ed25519 -N ""

# Or use your existing key if you have one
ls ~/.ssh/id_ed25519.pub
```
##### 0.4.2 Add Your Public Key to Both Linodes

Copy your public key to both Linodes:

```bash
# Copy your public key to the CI/CD Linode
ssh-copy-id root@YOUR_CI_CD_IP

# Copy your public key to the Production Linode
ssh-copy-id root@YOUR_PRODUCTION_IP
```

**Alternative method** (if `ssh-copy-id` doesn't work):

```bash
# Copy your public key content
cat ~/.ssh/id_ed25519.pub

# Then manually add it on each server
ssh root@YOUR_CI_CD_IP
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

ssh root@YOUR_PRODUCTION_IP
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```
##### 0.4.3 Test SSH Key Authentication

Test that you can access both servers without passwords:

```bash
# Test CI/CD Linode
ssh root@YOUR_CI_CD_IP 'echo "SSH key authentication works for CI/CD"'

# Test Production Linode
ssh root@YOUR_PRODUCTION_IP 'echo "SSH key authentication works for Production"'
```

**Expected output**: The echo messages should appear without password prompts.
##### 0.4.4 Create Deployment Users

On both Linodes (still logged in as root, so `sudo` is optional here), create the deployment user with sudo privileges:

```bash
# Create deployment user
sudo useradd -m -s /bin/bash DEPLOY_USER
sudo usermod -aG sudo DEPLOY_USER

# Set a random password (you won't need it for SSH key auth, but the account should not be passwordless)
echo "DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd

# Copy your SSH key to the deployment user
sudo mkdir -p /home/DEPLOY_USER/.ssh
sudo cp ~/.ssh/authorized_keys /home/DEPLOY_USER/.ssh/
sudo chown -R DEPLOY_USER:DEPLOY_USER /home/DEPLOY_USER/.ssh
sudo chmod 700 /home/DEPLOY_USER/.ssh
sudo chmod 600 /home/DEPLOY_USER/.ssh/authorized_keys
```
##### 0.4.5 Disable Root SSH Access

On both Linodes, disable root SSH access for security:

```bash
# Edit SSH configuration
sudo nano /etc/ssh/sshd_config
```

Find and modify these lines:

```
PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
```

**Note**: We disable root SSH access entirely and use the deployment user for all SSH operations.

Validate the configuration, then restart the SSH service:

```bash
# Check the config for syntax errors before restarting
sudo sshd -t

sudo systemctl restart ssh
```

**Important**: Test SSH access with the deployment user before closing your current session to ensure you don't get locked out.
##### 0.4.6 Test Deployment User Access

Test that you can access both servers as the deployment user:

```bash
# Test CI/CD Linode
ssh DEPLOY_USER@YOUR_CI_CD_IP 'echo "Deployment user SSH access works for CI/CD"'

# Test Production Linode
ssh DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "Deployment user SSH access works for Production"'
```

**Expected output**: The echo messages should appear without password prompts.
##### 0.4.7 Create SSH Config for Easy Access

On your local machine, create an SSH config file for easy access:

```bash
# Create SSH config
cat > ~/.ssh/config << 'EOF'
Host ci-cd-dev
    HostName YOUR_CI_CD_IP
    User DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no

Host production-dev
    HostName YOUR_PRODUCTION_IP
    User DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no
EOF

chmod 600 ~/.ssh/config
```

Now you can access the servers easily:

```bash
ssh ci-cd-dev
ssh production-dev
```

---
## Part 1: CI/CD Linode Setup

### Step 1: Initial System Setup

#### 1.1 Update the System

```bash
sudo apt update && sudo apt upgrade -y
```

**What this does**: Updates package lists and upgrades all installed packages to their latest versions.

**Expected output**: A list of packages being updated, followed by completion messages.
#### 1.2 Configure Timezone

```bash
# Configure timezone interactively
sudo dpkg-reconfigure tzdata

# Verify timezone setting
date
```

**What this does**: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).

**Expected output**: After selecting your timezone, the `date` command should show the current date and time in your selected timezone.
#### 1.3 Configure /etc/hosts

```bash
# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts

# Verify the configuration
cat /etc/hosts
```

**What this does**:

- Adds localhost entries for both IPv4 and IPv6 addresses to `/etc/hosts`
- Ensures proper localhost resolution for both IPv4 and IPv6

**Important**: Replace `YOUR_CI_CD_IPV4_ADDRESS` and `YOUR_CI_CD_IPV6_ADDRESS` with the actual IPv4 and IPv6 addresses of your CI/CD Linode, obtained from your Linode dashboard.

**Expected output**: The `/etc/hosts` file should show entries for `127.0.0.1`, `::1`, and your Linode's actual IP addresses all mapping to `localhost`.
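Before pasting an address into `/etc/hosts`, a quick shape check catches copy-paste mistakes. A minimal helper of our own (not exhaustive — it does not range-check the octets):

```shell
# is_ipv4: rough dotted-quad shape check (demo helper, not exhaustive)
is_ipv4() {
  printf '%s' "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

is_ipv4 '203.0.113.10' && echo 'valid'
is_ipv4 'not-an-ip'    || echo 'invalid'
# → valid
# → invalid
```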
#### 1.4 Install Essential Packages

```bash
sudo apt install -y \
    curl \
    wget \
    git \
    build-essential \
    pkg-config \
    libssl-dev \
    ca-certificates \
    apt-transport-https \
    software-properties-common \
    apache2-utils
```

**What this does**: Installs development tools, SSL libraries, and utilities needed for Docker and application building.
### Step 2: Create Users

#### 2.1 Create Service Account

```bash
sudo useradd -r -s /bin/bash -m -d /home/SERVICE_USER SERVICE_USER
echo "SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd
```

**Note**: The service account deliberately gets no sudo access, in line with the security model above.

#### 2.2 Verify Users

```bash
sudo su - SERVICE_USER
whoami
pwd
exit

sudo su - DEPLOY_USER
whoami
pwd
exit
```
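To confirm the separation, `id -nG USER` lists a user's groups; the service account's list should not contain `sudo`. A string-only demo of the check (the sample group list is hypothetical):

```shell
# Stand-in for `id -nG SERVICE_USER` output
groups='SERVICE_USER docker'

# Unquoted on purpose: word-splitting yields one group per line
if printf '%s\n' $groups | grep -qx sudo; then
  echo 'has sudo'
else
  echo 'no sudo'
fi
# → no sudo
```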
### Step 3: Install Docker

#### 3.1 Add Docker Repository

```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
```

#### 3.2 Install Docker Packages

```bash
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
```

#### 3.3 Configure Docker for Service Account

```bash
sudo usermod -aG docker SERVICE_USER
```
### Step 4: Set Up Docker Registry

#### 4.1 Create Registry Directory

```bash
sudo mkdir -p /opt/registry
sudo chown SERVICE_USER:SERVICE_USER /opt/registry
```

#### 4.2 Create Registry Configuration

```bash
sudo tee /opt/registry/config.yml > /dev/null << 'EOF'
version: 0.1
log:
  level: info
storage:
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
  cache:
    blobdescriptor: inmemory
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
    X-Frame-Options: [DENY]
    X-XSS-Protection: [1; mode=block]
  # Random value used to sign upload session state
  secret: "your-secret-key-here"
# Require htpasswd authentication for registry operations
auth:
  htpasswd:
    realm: basic-realm
    path: /etc/docker/registry/auth.htpasswd
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
EOF
```

**Note**: `sudo tee` is used because `/opt/registry` is owned by `SERVICE_USER`, and `http.secret` is an opaque signing value, not an access-control setting — replace it with your own random string.
#### 4.3 Create Authentication File

```bash
# Generate the push password and keep it: you will need it for `docker login` in Step 5
PUSH_PASSWORD=$(openssl rand -base64 32)
echo "push-user password: ${PUSH_PASSWORD}"

# Create the htpasswd file with a bcrypt hash
htpasswd -Bbn push-user "${PUSH_PASSWORD}" | sudo tee /opt/registry/auth.htpasswd

# Create a read-only user (optional, for additional security)
READ_PASSWORD=$(openssl rand -base64 32)
echo "read-user password: ${READ_PASSWORD}"
htpasswd -Bbn read-user "${READ_PASSWORD}" | sudo tee -a /opt/registry/auth.htpasswd
```

**Important**: Record both passwords somewhere safe (e.g., a password manager). The htpasswd file stores only bcrypt hashes, so the plaintext passwords cannot be recovered from it later.
#### 4.4 Create Docker Compose for Registry

```bash
sudo tee /opt/registry/docker-compose.yml > /dev/null << 'EOF'
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    volumes:
      - ./config.yml:/etc/docker/registry/config.yml:ro
      - ./auth.htpasswd:/etc/docker/registry/auth.htpasswd:ro
      - registry_data:/var/lib/registry
    restart: unless-stopped
    networks:
      - registry_network

  registry-ui:
    image: joxit/docker-registry-ui:latest
    ports:
      - "8080:80"
    environment:
      - REGISTRY_TITLE=APP_NAME Registry
      - REGISTRY_URL=http://registry:5000
    depends_on:
      - registry
    restart: unless-stopped
    networks:
      - registry_network

volumes:
  registry_data:

networks:
  registry_network:
    driver: bridge
EOF
```

**Note**: The top-level `version:` key is obsolete in Compose v2 and has been omitted.
#### 4.5 Install Required Tools

The `htpasswd` utility (package `apache2-utils`) was already installed in Step 1.4. If you skipped that step, install it now:

```bash
# Install htpasswd utility
sudo apt install -y apache2-utils
```
#### 4.6 Start Registry

```bash
cd /opt/registry
sudo docker compose up -d
```

**Note**: This guide installs the Compose v2 plugin, so the command is `docker compose` (with a space), not the legacy `docker-compose`. Per the security model, the deployment user runs Docker with `sudo`.
#### 4.7 Test Registry Setup

Because the registry requires htpasswd authentication, log in first and pass credentials to `curl`:

```bash
# Check if containers are running
cd /opt/registry
sudo docker compose ps

# Log in with the push-user password you recorded in Step 4.3
sudo docker login localhost:5000 -u push-user

# Test registry API (authenticated)
curl -u push-user:"${PUSH_PASSWORD}" http://localhost:5000/v2/_catalog

# Test registry UI (optional)
curl -I http://localhost:8080

# Test Docker push/pull (optional but recommended)
# Create a test image
echo "FROM alpine:latest" > /tmp/test.Dockerfile
echo "RUN echo 'Hello from test image'" >> /tmp/test.Dockerfile

# Build and tag test image
sudo docker build -f /tmp/test.Dockerfile -t localhost:5000/test:latest /tmp

# Push to registry
sudo docker push localhost:5000/test:latest

# Verify image is in registry
curl -u push-user:"${PUSH_PASSWORD}" http://localhost:5000/v2/_catalog
curl -u push-user:"${PUSH_PASSWORD}" http://localhost:5000/v2/test/tags/list

# Pull image back (verifies pull works)
sudo docker rmi localhost:5000/test:latest
sudo docker pull localhost:5000/test:latest

# Clean up test image
sudo docker rmi localhost:5000/test:latest
rm /tmp/test.Dockerfile
```

**Expected Output**:

- `docker compose ps` should show both `registry` and `registry-ui` as "Up"
- The authenticated `curl .../v2/_catalog` should return `{"repositories":[]}` (empty initially); without credentials it returns a 401
- `curl -I http://localhost:8080` should return HTTP 200
- The push/pull test should complete successfully

**If something goes wrong**:

- Check container logs: `sudo docker compose logs`
- Verify the port is listening: `ss -tlnp | grep :5000`
- Check Docker daemon config: `cat /etc/docker/daemon.json`
- Restart the registry: `sudo docker compose restart`
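The catalog endpoint returns a small JSON document. If `jq` isn't installed, the repository names can still be pulled out with standard tools; a throwaway demo on a sample response (the repository names are hypothetical):

```shell
# Sample /v2/_catalog response (hypothetical repositories)
resp='{"repositories":["myapp/backend","myapp/frontend"]}'

# Strip JSON punctuation and print one repository per line
printf '%s\n' "$resp" | tr -d '[]"{}' | sed 's/^repositories://' | tr ',' '\n'
# → myapp/backend
# → myapp/frontend
```

This is fragile for arbitrary JSON; for anything beyond a quick check, install `jq` and use `jq -r '.repositories[]'`.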
### Step 5: Configure Docker for Registry Access

#### 5.1 Configure the Docker Daemon

The daemon config only needs to whitelist the plain-HTTP registry. Credentials do not belong in `/etc/docker/daemon.json` — `docker login` stores them per-user in `~/.docker/config.json`:

```bash
sudo tee /etc/docker/daemon.json << 'EOF'
{
  "insecure-registries": ["YOUR_CI_CD_IP:5000"]
}
EOF
```
#### 5.2 Restart Docker and Log In

```bash
sudo systemctl restart docker

# Authenticate for pushes, using the push-user password from Step 4.3
sudo docker login YOUR_CI_CD_IP:5000 -u push-user
```
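`docker login` records base64(`user:password`) in the `auth` field of the client's `config.json`. The encoding is reproducible with `base64` (the password here is a made-up example, not one of the generated ones):

```shell
# Encode credentials the way docker login does (example password only)
printf '%s' 'push-user:s3cret' | base64
# → cHVzaC11c2VyOnMzY3JldA==
```

Note that base64 is an encoding, not encryption — treat `~/.docker/config.json` as a secret file.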
### Registry Access Model

With htpasswd authentication enabled, the registry's built-in auth applies to **all** operations, pulls included — `registry:2` cannot grant anonymous reads while requiring auth for writes on its own (that would require a reverse proxy in front), and plain htpasswd does not distinguish read from write, so the read-user/push-user split is organizational rather than enforced. The resulting model:

#### **Authenticated Read Access**

Any machine with registry credentials can pull images:

```bash
# From any machine, after `docker login YOUR_CI_CD_IP:5000`
docker pull YOUR_CI_CD_IP:5000/APP_NAME/backend:latest
docker pull YOUR_CI_CD_IP:5000/APP_NAME/frontend:latest
```

#### **Authenticated Write Access**

The CI/CD Linode pushes images using the push-user credentials:

```bash
# From the CI/CD Linode (logged in as push-user)
docker push YOUR_CI_CD_IP:5000/APP_NAME/backend:latest
docker push YOUR_CI_CD_IP:5000/APP_NAME/frontend:latest
```

#### **Registry UI Access**

Web interface for browsing images:

```
http://YOUR_CI_CD_IP:8080
```

#### **Client Configuration**

Client machines need the registry whitelisted as insecure, plus a one-time `docker login`:

```bash
# Add to /etc/docker/daemon.json on client machines
{
  "insecure-registries": ["YOUR_CI_CD_IP:5000"]
}

# Then authenticate once (read-user credentials are sufficient for pulls)
docker login YOUR_CI_CD_IP:5000 -u read-user
```
### Step 6: Set Up SSH for Production Communication

#### 6.1 Generate SSH Key Pair

```bash
ssh-keygen -t ed25519 -C "ci-cd-server" -f ~/.ssh/id_ed25519 -N ""
```

#### 6.2 Create SSH Config

```bash
cat > ~/.ssh/config << 'EOF'
Host production
    HostName YOUR_PRODUCTION_IP
    User DEPLOY_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF

chmod 600 ~/.ssh/config
```
### Step 7: Install Forgejo Actions Runner

#### 7.1 Download Runner

```bash
cd ~
# Check https://code.forgejo.org/forgejo/runner/releases for the current version
wget https://code.forgejo.org/forgejo/runner/releases/download/v0.2.11/forgejo-runner-0.2.11-linux-amd64
chmod +x forgejo-runner-0.2.11-linux-amd64
sudo mv forgejo-runner-0.2.11-linux-amd64 /usr/local/bin/forgejo-runner
```
#### 7.2 Create Systemd Service

```bash
sudo tee /etc/systemd/system/forgejo-runner.service > /dev/null << 'EOF'
[Unit]
Description=Forgejo Actions Runner
After=network.target

[Service]
Type=simple
User=SERVICE_USER
WorkingDirectory=/home/SERVICE_USER
ExecStart=/usr/local/bin/forgejo-runner daemon
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
```

#### 7.3 Enable Service

```bash
sudo systemctl daemon-reload
sudo systemctl enable forgejo-runner.service
```
#### 7.4 Defer Testing Until Registration

The service is enabled but should not be started yet: the runner cannot connect until it is registered with your Forgejo instance. Registration, startup, and testing are covered in Step 11.
### Step 8: Set Up Monitoring and Cleanup

#### 8.1 Monitoring Script

**Important**: The repository includes a pre-configured monitoring script in the `scripts/` directory that can be used for both CI/CD and production monitoring.

**Repository Script**:

- `scripts/monitor.sh` - Comprehensive monitoring script with support for both CI/CD and production environments

**To use the repository monitoring script**:

```bash
# Clone the repository if not already done
git clone https://your-forgejo-instance/your-username/APP_NAME.git /tmp/monitoring-setup
cd /tmp/monitoring-setup

# Make the script executable
chmod +x scripts/monitor.sh

# Test CI/CD monitoring
./scripts/monitor.sh --type ci-cd

# Test production monitoring (if you have a production setup)
./scripts/monitor.sh --type production

# Keep a local copy before deleting the clone (used below)
cp scripts/monitor.sh ~/monitor.sh
chmod +x ~/monitor.sh

# Clean up
cd /
rm -rf /tmp/monitoring-setup
```

**Use the local copy for convenience**:

```bash
# Test the local copy saved above
~/monitor.sh --type ci-cd
```

**Note**: The repository script is more comprehensive and includes proper error handling, colored output, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate monitoring information.
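For reference, the disk check a monitor script of this kind typically performs can be done in one line (output varies per machine, so none is shown; `-P` forces single-line POSIX output so `awk` sees one record per filesystem):

```shell
# Report root filesystem usage, as a monitor script might
df -Ph / | awk 'NR==2 {print "disk used: " $5}'
```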
#### 8.2 Cleanup Script

**Important**: The repository includes a pre-configured cleanup script in the `scripts/` directory that can be used for both CI/CD and production cleanup operations.

**Repository Script**:

- `scripts/cleanup.sh` - Comprehensive cleanup script with support for both CI/CD and production environments

**To use the repository cleanup script**:

```bash
# Clone the repository if not already done
git clone https://your-forgejo-instance/your-username/APP_NAME.git /tmp/cleanup-setup
cd /tmp/cleanup-setup

# Make the script executable
chmod +x scripts/cleanup.sh

# Test CI/CD cleanup (dry run first)
./scripts/cleanup.sh --type ci-cd --dry-run

# Run CI/CD cleanup
./scripts/cleanup.sh --type ci-cd

# Test production cleanup (dry run first)
./scripts/cleanup.sh --type production --dry-run

# Keep a local copy before deleting the clone (needed for the cron job below)
cp scripts/cleanup.sh ~/cleanup.sh
chmod +x ~/cleanup.sh

# Clean up
cd /
rm -rf /tmp/cleanup-setup
```

**Test the local copy (dry run)**:

```bash
~/cleanup.sh --type ci-cd --dry-run
```

**Note**: The repository script is more comprehensive and includes proper error handling, colored output, dry-run mode, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate cleanup operations.
#### 8.3 Test Cleanup Script

From the cloned repository directory (or substitute `~/cleanup.sh` if you kept a local copy):

```bash
# Create some test images to clean up
docker pull alpine:latest
docker pull nginx:latest
docker tag alpine:latest test-cleanup:latest
docker tag nginx:latest test-cleanup2:latest

# Test cleanup with a dry run first
./scripts/cleanup.sh --type ci-cd --dry-run

# Run the cleanup script
./scripts/cleanup.sh --type ci-cd

# Verify cleanup worked
echo "Checking remaining images:"
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"

echo "Checking remaining volumes:"
docker volume ls

echo "Checking remaining networks:"
docker network ls
```

**Expected Output**:

- Cleanup script should run without errors
- Test images should be removed
- System should report cleanup completion
- Remaining images should be minimal (only actively used ones)

**If something goes wrong**:

- Check script permissions: `ls -la scripts/cleanup.sh`
- Verify Docker access: `docker ps`
- Check registry access: `cd /opt/registry && sudo docker compose ps`
- Run manually: `bash -x scripts/cleanup.sh`
#### 8.4 Set Up Automated Cleanup

Point cron at the persistent local copy (`~/cleanup.sh`) rather than the `/tmp` clone, which was deleted above and would not survive a reboot anyway:

```bash
# Create a cron job to run cleanup daily at 3 AM
(crontab -l 2>/dev/null; echo "0 3 * * * $HOME/cleanup.sh --type ci-cd >> $HOME/cleanup.log 2>&1") | crontab -

# Verify the cron job was added
crontab -l
```

**What this does:**

- **Runs automatically**: The cleanup script runs every day at 3:00 AM
- **Frequency**: Daily cleanup to prevent disk space issues
- **Logging**: All cleanup output is logged to `~/cleanup.log`
- **What it cleans**: Unused Docker images, volumes, networks, and registry images
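A crontab entry is five schedule fields followed by the command. A quick field-count sanity check for an entry like the one above (the script path is illustrative):

```shell
# Count fields: 5 schedule fields + at least one command word
line='0 3 * * * /home/deploy/cleanup.sh --type ci-cd'
printf '%s\n' "$line" | awk '{ print (NF >= 6) ? "looks valid" : "too few fields" }'
# → looks valid
```

This only checks the shape; `crontab -l` after installation remains the authoritative verification.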
### Step 9: Configure Firewall

```bash
sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 5000/tcp  # Docker registry
sudo ufw allow 8080/tcp  # Registry UI
```

**Security Model**:

- **Port 5000 (Registry)**: Open; reads and writes both require htpasswd authentication
- **Port 8080 (UI)**: Open for browsing images
- **SSH**: Open to all by default; optionally restrict to your own IP with `sudo ufw allow from YOUR_IP to any port 22 proto tcp` and then `sudo ufw delete allow ssh`
- **All other ports**: Blocked
### Step 10: Test CI/CD Setup

#### 10.1 Test Docker Installation

```bash
docker --version
docker compose version
```

#### 10.2 Check Registry Status

```bash
cd /opt/registry
sudo docker compose ps
```

#### 10.3 Test Registry Access

```bash
# Authenticated, using the push-user password from Step 4.3
curl -u push-user:"${PUSH_PASSWORD}" http://localhost:5000/v2/_catalog
```

#### 10.4 Get Public Key for Production Server

```bash
cat ~/.ssh/id_ed25519.pub
```

**Important**: Copy this public key - you'll need it for the production server setup.
### Step 11: Configure Forgejo Actions Runner

#### 11.1 Get Runner Token

1. Go to your Forgejo repository
2. Navigate to Settings → Actions → Runners
3. Click "New runner"
4. Copy the registration token

#### 11.2 Configure Runner

```bash
# Register the runner with the token copied in 11.1
forgejo-runner register \
    --instance https://your-forgejo-instance \
    --token YOUR_TOKEN \
    --name "ci-cd-runner" \
    --labels "ubuntu-latest,docker" \
    --no-interactive
```

#### 11.3 Start Runner

```bash
sudo systemctl start forgejo-runner.service
sudo systemctl status forgejo-runner.service
```
#### 11.4 Test Runner Configuration

```bash
# Check if the runner is running
sudo systemctl status forgejo-runner.service

# Check runner logs
sudo journalctl -u forgejo-runner.service --no-pager

# Test runner connectivity (in a separate terminal)
forgejo-runner list

# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "ci-cd-runner" with status "Online"
```

**Expected Output**:

- `systemctl status` should show "active (running)"
- `forgejo-runner list` should show your runner
- Forgejo web interface should show the runner as online

**If something goes wrong**:

- Check logs: `sudo journalctl -u forgejo-runner.service -f`
- Verify token: Make sure the registration token is correct
- Check network: Ensure the runner can reach your Forgejo instance
- Restart service: `sudo systemctl restart forgejo-runner.service`

---
## Part 2: Production Linode Setup

### Step 12: Initial System Setup

#### 12.1 Update the System

```bash
sudo apt update && sudo apt upgrade -y
```

#### 12.2 Configure Timezone

```bash
# Configure timezone interactively
sudo dpkg-reconfigure tzdata

# Verify timezone setting
date
```

**What this does**: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).

**Expected output**: After selecting your timezone, the `date` command should show the current date and time in your selected timezone.
#### 12.3 Configure /etc/hosts

```bash
# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts

# Verify the configuration
cat /etc/hosts
```

**What this does**:

- Adds localhost entries for both IPv4 and IPv6 addresses to `/etc/hosts`
- Ensures proper localhost resolution for both IPv4 and IPv6

**Important**: Replace `YOUR_PRODUCTION_IPV4_ADDRESS` and `YOUR_PRODUCTION_IPV6_ADDRESS` with the actual IPv4 and IPv6 addresses of your Production Linode, obtained from your Linode dashboard.

**Expected output**: The `/etc/hosts` file should show entries for `127.0.0.1`, `::1`, and your Linode's actual IP addresses all mapping to `localhost`.
|
|
|
|
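If you want to script this verification, a minimal check that the required entries made it into the file might look like the sketch below (it operates on a throwaway demo file rather than the real `/etc/hosts`):

```bash
# Verify required loopback entries exist in a hosts file.
# Uses a temp demo file; point hosts_file at /etc/hosts for the real check.
hosts_file="$(mktemp)"
printf '127.0.0.1 localhost\n::1 localhost ip6-localhost ip6-loopback\n' > "$hosts_file"

for entry in "127.0.0.1" "::1"; do
  if grep -q "^${entry}[[:space:]]" "$hosts_file"; then
    echo "ok: $entry"
  else
    echo "missing: $entry"
  fi
done
```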
#### 12.4 Install Essential Packages

```bash
sudo apt install -y \
    curl \
    wget \
    git \
    ca-certificates \
    apt-transport-https \
    software-properties-common \
    ufw \
    fail2ban \
    htop \
    nginx \
    certbot \
    python3-certbot-nginx
```
### Step 13: Create Users

#### 13.1 Create the SERVICE_USER and DEPLOY_USER Users

```bash
sudo useradd -r -s /bin/bash -m -d /home/SERVICE_USER SERVICE_USER
sudo usermod -aG sudo SERVICE_USER
echo "SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd

# Create the deployment user (verified below and referenced by the PROD_USER secret)
sudo useradd -r -s /bin/bash -m -d /home/DEPLOY_USER DEPLOY_USER
sudo usermod -aG sudo DEPLOY_USER
echo "DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd
```
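The `chpasswd` lines above assign each account a random throwaway password so nobody can log in with a guessable one. The same idea can be sketched with only coreutils; the `random_password` helper name is illustrative, not part of the repository scripts:

```bash
# Generate a 32-byte random password, base64-encoded (illustrative helper)
random_password() {
  head -c 32 /dev/urandom | base64 | tr -d '\n'
}

pw="$(random_password)"
echo "${#pw}"   # 32 random bytes encode to 44 base64 characters
```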
#### 13.2 Verify Users

```bash
sudo su - SERVICE_USER
whoami
pwd
exit

sudo su - DEPLOY_USER
whoami
pwd
exit
```

### Step 14: Install Docker

#### 14.1 Add Docker Repository

```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
```

#### 14.2 Install Docker Packages

```bash
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
```

#### 14.3 Configure Docker for Service Account

```bash
sudo usermod -aG docker SERVICE_USER
```

### Step 15: Install Docker Compose

```bash
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```

**Note**: This installs the standalone `docker-compose` binary alongside the `docker-compose-plugin` from Step 14.2, so the `docker-compose` command used throughout this guide is available.
### Step 16: Configure Security

#### 16.1 Configure Firewall

```bash
sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 3000/tcp
sudo ufw allow 3001/tcp
```

**Note**: Ports 3000 and 3001 are opened for direct access to the application's frontend and backend containers. If all traffic is served through the nginx reverse proxy, these two rules can be omitted.

#### 16.2 Configure Fail2ban

```bash
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
```
### Step 17: Create Application Directory

#### 17.1 Create Directory Structure

```bash
sudo mkdir -p /opt/APP_NAME
sudo chown SERVICE_USER:SERVICE_USER /opt/APP_NAME
```

**Note**: Replace `APP_NAME` with your actual application name. This directory name can be controlled via the `APP_NAME` secret in your Forgejo repository settings. If you set the `APP_NAME` secret to `myapp`, the deployment directory will be `/opt/myapp`.

#### 17.2 Create SSL Directory (Optional - for domain users)

```bash
sudo mkdir -p /opt/APP_NAME/nginx/ssl
sudo chown SERVICE_USER:SERVICE_USER /opt/APP_NAME/nginx/ssl
```

### Step 18: Clone Repository and Set Up Application Files

#### 18.1 Switch to SERVICE_USER User

```bash
sudo su - SERVICE_USER
```

#### 18.2 Clone Repository

```bash
cd /opt/APP_NAME
git clone https://your-forgejo-instance/your-username/APP_NAME.git .
```

**Important**: The repository includes a pre-configured `nginx/nginx.conf` file that handles both SSL and non-SSL scenarios, with proper security headers, rate limiting, and CORS configuration. This file is used automatically by the Docker Compose setup.

**Important**: The repository also includes a pre-configured `.forgejo/workflows/ci.yml` file that handles the complete CI/CD pipeline, including testing, building, and deployment. This workflow is already set up to work with the private registry and production deployment.

**Note**: Replace `your-forgejo-instance` and `your-username/APP_NAME` with your actual Forgejo instance URL and repository path.

#### 18.3 Create Environment File

The repository doesn't include a `.env.example` file for security reasons. The CI/CD pipeline creates the `.env` file dynamically during deployment. For manual testing or initial setup, however, you can create a basic `.env` file:

```bash
cat > /opt/APP_NAME/.env << 'EOF'
# Production Environment Variables
POSTGRES_PASSWORD=your_secure_password_here
REGISTRY=YOUR_CI_CD_IP:5000
IMAGE_NAME=APP_NAME
IMAGE_TAG=latest

# Database Configuration
POSTGRES_DB=sharenet
POSTGRES_USER=sharenet
DATABASE_URL=postgresql://sharenet:your_secure_password_here@postgres:5432/sharenet

# Application Configuration
NODE_ENV=production
RUST_LOG=info
RUST_BACKTRACE=1
EOF
```

**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address.

**Default Environment Variables** (from `docker-compose.yml`):
- `POSTGRES_DB=sharenet` - PostgreSQL database name
- `POSTGRES_USER=sharenet` - PostgreSQL username
- `POSTGRES_PASSWORD=changeme` - PostgreSQL password (should be changed)
- `REGISTRY=your-username/sharenet` - Docker registry path (used as fallback)
- `IMAGE_NAME=your-username/sharenet` - Docker image name (used as fallback)
- `IMAGE_TAG=latest` - Docker image tag (used as fallback)

**Note**: The database user and database name can be controlled via the `POSTGRES_USER` and `POSTGRES_DB` secrets in your Forgejo repository settings. If you set these secrets, they override the default values used in this environment file.

**Security Note**: Always change the default `POSTGRES_PASSWORD` from `changeme` to a strong, unique password in production.
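Before a manual deployment, it can help to confirm the `.env` file defines everything the stack expects. A small sketch of such a check — the required-variable list is an assumption based on the example file above, and `check_env_file` is an illustrative name:

```bash
# Check a .env file for required variables (list assumed from the example above)
required_vars="POSTGRES_PASSWORD REGISTRY IMAGE_NAME IMAGE_TAG POSTGRES_DB POSTGRES_USER DATABASE_URL"

check_env_file() {
  local env_file="$1" missing=0
  for var in $required_vars; do
    if ! grep -q "^${var}=" "$env_file"; then
      echo "missing: $var"
      missing=1
    fi
  done
  return $missing
}

# Example: a file that omits POSTGRES_PASSWORD fails the check
printf 'REGISTRY=10.0.0.1:5000\nIMAGE_NAME=app\nIMAGE_TAG=latest\nPOSTGRES_DB=db\nPOSTGRES_USER=u\nDATABASE_URL=postgres://u@db\n' > /tmp/env-check-demo
check_env_file /tmp/env-check-demo || echo "fix .env before deploying"
```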
#### 18.4 Verify Repository Contents

```bash
# Check that the docker-compose.yml file is present
ls -la docker-compose.yml

# Check that the nginx configuration is present
ls -la nginx/nginx.conf

# Check that the CI/CD workflow is present
ls -la .forgejo/workflows/ci.yml

# Check the repository structure
ls -la

# Verify the docker-compose.yml content
head -20 docker-compose.yml

# Verify the nginx configuration content
head -20 nginx/nginx.conf

# Verify the CI/CD workflow content
head -20 .forgejo/workflows/ci.yml
```

**Expected output**: You should see the `docker-compose.yml` file, `nginx/nginx.conf` file, `.forgejo/workflows/ci.yml` file, and other project files from your repository.

#### 18.5 Deployment Scripts

**Important**: The repository includes pre-configured deployment scripts in the `scripts/` directory that are used by the CI/CD pipeline. These scripts handle safe production deployments with database migrations, backups, and rollback capabilities.

**Repository Scripts** (used by CI/CD pipeline):
- `scripts/deploy.sh` - Main deployment script with migration support
- `scripts/deploy-local.sh` - Local development deployment script
- `scripts/migrate.sh` - Database migration management
- `scripts/validate_migrations.sh` - Migration validation
- `scripts/monitor.sh` - Comprehensive monitoring script for both CI/CD and production environments
- `scripts/cleanup.sh` - Comprehensive cleanup script for both CI/CD and production environments
- `scripts/backup.sh` - Comprehensive backup script for both CI/CD and production environments

**To use the repository deployment scripts**:
```bash
# The scripts are already available in the cloned repository
cd /opt/APP_NAME

# Make the scripts executable
chmod +x scripts/deploy.sh scripts/deploy-local.sh

# Test local deployment
./scripts/deploy-local.sh status

# Run local deployment
./scripts/deploy-local.sh deploy

# Test production deployment (dry run)
./scripts/deploy.sh check

# Run production deployment
./scripts/deploy.sh deploy
```

**Alternative: Create a local copy for convenience**:
```bash
# Copy the local deployment script to the application directory for easy access
cp scripts/deploy-local.sh /opt/APP_NAME/deploy-local.sh
chmod +x /opt/APP_NAME/deploy-local.sh

# Test the local copy
cd /opt/APP_NAME
./deploy-local.sh status
```

**Note**: The repository scripts are more comprehensive and include proper error handling, colored output, and multiple commands. The `scripts/deploy.sh` script is used by the CI/CD pipeline and includes database migration handling, backup creation, and rollback capabilities. The `scripts/deploy-local.sh` script is designed for local development deployments and includes status checking, restart, and log viewing capabilities.

#### 18.6 Backup Script

**Important**: The repository includes a pre-configured backup script in the `scripts/` directory that can be used for both CI/CD and production backup operations.

**Repository Script**:
- `scripts/backup.sh` - Comprehensive backup script with support for both CI/CD and production environments

**To use the repository backup script**:
```bash
# The script is already available in the cloned repository
cd /opt/APP_NAME

# Make the script executable
chmod +x scripts/backup.sh

# Test production backup (dry run first)
./scripts/backup.sh --type production --app-name APP_NAME --dry-run

# Run production backup
./scripts/backup.sh --type production --app-name APP_NAME

# Test CI/CD backup (dry run first)
./scripts/backup.sh --type ci-cd --app-name APP_NAME --dry-run
```

**Alternative: Create a local copy for convenience**:
```bash
# Copy the script to the application directory for easy access
cp scripts/backup.sh /opt/APP_NAME/backup-local.sh
chmod +x /opt/APP_NAME/backup-local.sh

# Test the local copy (dry run)
cd /opt/APP_NAME
./backup-local.sh --type production --app-name APP_NAME --dry-run
```

**Note**: The repository script is more comprehensive and includes proper error handling, colored output, dry-run mode, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate backup operations.
#### 18.6.1 Set Up Automated Backup Scheduling

```bash
# Create a cron job to run backups daily at 2 AM using the repository script
(crontab -l 2>/dev/null; echo "0 2 * * * cd /opt/APP_NAME && ./scripts/backup.sh --type production --app-name APP_NAME >> /opt/APP_NAME/backup.log 2>&1") | crontab -

# Verify the cron job was added
crontab -l
```

**What this does:**
- **Runs automatically**: The backup script runs every day at 2:00 AM
- **Frequency**: Daily backups to ensure minimal data loss
- **Logging**: All backup output is logged to `/opt/APP_NAME/backup.log`
- **Retention**: The script automatically keeps only the last 7 days of backups (configurable)
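The 7-day retention mentioned above boils down to a single `find` invocation; a self-contained sketch of that logic (the directory and archive names are illustrative, not the repository script's actual layout):

```bash
# Prune archives older than RETENTION_DAYS from a backup directory (illustrative)
RETENTION_DAYS=7
BACKUP_DIR="$(mktemp -d)"   # stand-in for /opt/APP_NAME/backups

touch -d "10 days ago" "$BACKUP_DIR/old-backup.tar.gz"   # simulate an expired archive
touch "$BACKUP_DIR/fresh-backup.tar.gz"                  # simulate today's archive

# -mtime +7 matches files last modified more than 7*24 hours ago
find "$BACKUP_DIR" -name '*.tar.gz' -mtime +"$RETENTION_DAYS" -delete
ls "$BACKUP_DIR"   # only fresh-backup.tar.gz remains
```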
**To test the backup manually:**
```bash
cd /opt/APP_NAME
./scripts/backup.sh --type production --app-name APP_NAME
```

**To view backup logs:**
```bash
tail -f /opt/APP_NAME/backup.log
```

**Alternative: Use a local copy for automated backup**:
```bash
# If you created a local copy, use that instead
(crontab -l 2>/dev/null; echo "0 2 * * * cd /opt/APP_NAME && ./backup-local.sh --type production --app-name APP_NAME >> /opt/APP_NAME/backup.log 2>&1") | crontab -
```

#### 18.7 Monitoring Script

**Important**: The repository includes a pre-configured monitoring script in the `scripts/` directory that can be used for production monitoring.

**Repository Script**:
- `scripts/monitor.sh` - Comprehensive monitoring script with support for both CI/CD and production environments

**To use the repository monitoring script**:
```bash
# The script is already available in the cloned repository
cd /opt/APP_NAME

# Make the script executable
chmod +x scripts/monitor.sh

# Test production monitoring
./scripts/monitor.sh --type production --app-name APP_NAME
```

**Alternative: Create a local copy for convenience**:
```bash
# Copy the script to the application directory for easy access
cp scripts/monitor.sh /opt/APP_NAME/monitor-local.sh
chmod +x /opt/APP_NAME/monitor-local.sh

# Test the local copy
cd /opt/APP_NAME
./monitor-local.sh --type production --app-name APP_NAME
```

**Note**: The repository script is more comprehensive and includes proper error handling, colored output, health checks, and automatic environment detection. It provides better monitoring information than a simple local script.

#### 18.7.1 Set Up Automated Monitoring

```bash
# Create a cron job to run monitoring every 5 minutes using the repository script
(crontab -l 2>/dev/null; echo "*/5 * * * * cd /opt/APP_NAME && ./scripts/monitor.sh --type production --app-name APP_NAME >> /opt/APP_NAME/monitor.log 2>&1") | crontab -

# Verify the cron job was added
crontab -l
```

**What this does:**
- **Runs automatically**: The monitoring script runs every 5 minutes
- **Frequency**: Every 5 minutes to catch issues quickly
- **Logging**: All monitoring output is logged to `/opt/APP_NAME/monitor.log`
- **What it monitors**: Container status, recent logs, CPU/memory/disk usage, network connections, health checks

**To test the monitoring manually:**
```bash
cd /opt/APP_NAME
./scripts/monitor.sh --type production --app-name APP_NAME
```

**To view monitoring logs:**
```bash
tail -f /opt/APP_NAME/monitor.log
```
### Step 19: Set Up SSH for CI/CD Communication

#### 19.1 Generate SSH Key Pair

```bash
ssh-keygen -t ed25519 -C "production-server" -f ~/.ssh/id_ed25519 -N ""
```

#### 19.2 Create SSH Config

```bash
cat > ~/.ssh/config << 'EOF'
Host ci-cd
    HostName YOUR_CI_CD_IP
    User SERVICE_USER
    IdentityFile ~/.ssh/id_ed25519
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF

chmod 600 ~/.ssh/config
```
### Step 20: Exchange SSH Keys

#### 20.1 Get Your Public Key

```bash
cat ~/.ssh/id_ed25519.pub
```

**Important**: Copy this public key - you'll need it for the CI/CD server setup.

#### 20.2 Add CI/CD Server's Public Key

```bash
echo "CI_CD_PUBLIC_KEY_HERE" >> ~/.ssh/authorized_keys
sed -i 's/YOUR_CI_CD_IP/YOUR_ACTUAL_CI_CD_IP/g' ~/.ssh/config
```

**Note**: Replace `CI_CD_PUBLIC_KEY_HERE` with the actual public key from your CI/CD server, and `YOUR_ACTUAL_CI_CD_IP` with the CI/CD Linode's real IP address so the `sed` command fills in the placeholder left in `~/.ssh/config`.
### Step 21: Update Application Configuration for CI/CD

#### 21.1 Update Environment Variables

```bash
cd /opt/APP_NAME
nano .env
```

**Required changes**:
- Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address
- Replace `your_secure_password_here` with a strong database password
- Update `DATABASE_URL` with the same password

### Step 22: Configure SSL Certificates (Optional - Domain Users Only)

**Skip this step if you don't have a domain name.**

#### 22.1 Install SSL Certificates

```bash
sudo certbot --nginx -d your-domain.com
```

#### 22.2 Copy SSL Certificates

```bash
sudo cp /etc/letsencrypt/live/your-domain.com/fullchain.pem /opt/APP_NAME/nginx/ssl/
sudo cp /etc/letsencrypt/live/your-domain.com/privkey.pem /opt/APP_NAME/nginx/ssl/
sudo chown SERVICE_USER:SERVICE_USER /opt/APP_NAME/nginx/ssl/*
```
### Step 23: Test CI/CD Integration

#### 23.1 Test SSH Connection

```bash
ssh ci-cd 'echo Connection successful'
```

**Expected output**: `Connection successful`.

#### 23.2 Test Registry Connection

```bash
curl http://YOUR_CI_CD_IP:5000/v2/_catalog
```

**Expected output**: `{"repositories":[]}` or a list of available images.
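The `REGISTRY`, `IMAGE_NAME`, and `IMAGE_TAG` values from Step 18.3 are assumed to be combined by the workflow into the full image reference it pushes to this registry. A sketch of that assembly (the sample values, including the TEST-NET IP, are illustrative):

```bash
# Assemble the image reference the way the workflow is assumed to (values illustrative)
REGISTRY="192.0.2.10:5000"   # your CI/CD Linode IP and registry port
IMAGE_NAME="sharenet"
IMAGE_TAG="latest"

IMAGE_REF="${REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}"
echo "$IMAGE_REF"   # → 192.0.2.10:5000/sharenet:latest
```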
---

## Part 3: Pipeline Configuration

### Step 24: Configure Forgejo Repository Secrets

Go to your Forgejo repository → Settings → Secrets and Variables → Actions, and add the following secrets:

#### Required Secrets:

- **`CI_HOST`**: Your CI/CD Linode IP address
  - **Purpose**: Used by the workflow to connect to your private Docker registry
  - **Example**: `192.168.1.100`
- **`PROD_HOST`**: Your Production Linode IP address
  - **Purpose**: Used by the deployment job to SSH into your production server
  - **Example**: `192.168.1.101`
- **`PROD_USER`**: SSH username for production server
  - **Purpose**: Username for SSH connection to production server
  - **Value**: Should be `DEPLOY_USER` (the deployment user you created)
  - **Example**: `deploy`
- **`PROD_SSH_KEY`**: SSH private key for deployment
  - **Purpose**: Private key for SSH authentication to production server
  - **Source**: Copy the private key from your CI/CD server
  - **How to get**: On the CI/CD server, run `cat ~/.ssh/id_ed25519`
  - **Format**: Include the entire key, including `-----BEGIN OPENSSH PRIVATE KEY-----` and `-----END OPENSSH PRIVATE KEY-----`

#### Optional Secrets (for enhanced security and flexibility):

- **`APP_NAME`**: Application name (used for directory, database, and image names)
  - **Purpose**: Controls the application directory name and database names
  - **Default**: `sharenet` (if not set)
  - **Example**: `myapp`, `webapp`, `api`
  - **Note**: This affects the deployment directory `/opt/APP_NAME` and database names
- **`POSTGRES_USER`**: PostgreSQL username for the application database
  - **Purpose**: Username for the application's PostgreSQL database
  - **Default**: `sharenet` (if not set)
  - **Example**: `appuser`, `webuser`, `apiuser`
  - **Note**: Should match the user created in the PostgreSQL setup
- **`POSTGRES_DB`**: PostgreSQL database name for the application
  - **Purpose**: Name of the application's PostgreSQL database
  - **Default**: `sharenet` (if not set)
  - **Example**: `myapp`, `webapp`, `api`
  - **Note**: Should match the database created in the PostgreSQL setup
- **`POSTGRES_PASSWORD`**: Database password for production
  - **Purpose**: Secure database password for production environment
  - **Note**: If not set, the workflow uses a default password
- **`REGISTRY_USERNAME`**: Username for Docker registry (if using authentication)
  - **Purpose**: Username for private registry access
  - **Note**: Only needed if your registry requires authentication
- **`REGISTRY_PASSWORD`**: Password for Docker registry (if using authentication)
  - **Purpose**: Password for private registry access
  - **Note**: Only needed if your registry requires authentication

#### How to Add Secrets:

1. Go to your Forgejo repository
2. Navigate to **Settings** → **Secrets and Variables** → **Actions**
3. Click **New Secret**
4. Enter the secret name and value
5. Click **Add Secret**

#### Security Notes:

- **Never commit secrets to your repository**
- **Use strong, unique passwords** for each environment
- **Rotate secrets regularly** for enhanced security
- **Limit access** to repository settings to trusted team members only
### Step 25: Test the Complete Pipeline

#### 25.1 Push Code Changes

Make a small change to your code and push to trigger the CI/CD pipeline:

```bash
# In your local repository
echo "# Test deployment" >> README.md
git add README.md
git commit -m "Test CI/CD pipeline"
git push
```

#### 25.2 Monitor Pipeline

1. Go to your Forgejo repository
2. Navigate to the Actions tab
3. Monitor the workflow execution
4. Check for any errors or issues

#### 25.3 Verify Deployment

After successful pipeline execution:

```bash
# Check application status
cd /opt/APP_NAME
docker-compose ps

# Check application logs
docker-compose logs

# Test application access
curl -I https://your-domain.com  # or http://YOUR_PRODUCTION_IP
```

---
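If you want the verification step above to be scriptable, you can count how many services `docker-compose ps` reports as "Up". The sketch below runs against a canned sample of that output (the service names and columns are illustrative):

```bash
# Count services reported as "Up" in docker-compose ps output (sample is canned)
ps_output='Name            Command       State   Ports
backend         ./backend     Up      0.0.0.0:3001->3001/tcp
frontend        npm start     Up      0.0.0.0:3000->3000/tcp
postgres        postgres      Up      5432/tcp'

# In a real check, replace the canned string with: ps_output="$(docker-compose ps)"
up_count=$(grep -c ' Up ' <<<"$ps_output")
echo "$up_count services up"   # → 3 services up
```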
## Testing and Verification

### Step 26: Test Application Access

**If you have a domain:**
```bash
# Test HTTP redirect to HTTPS
curl -I http://your-domain.com

# Test HTTPS access
curl -I https://your-domain.com

# Test application health endpoint (checks backend services)
curl https://your-domain.com/health
```

**If you don't have a domain (IP access only):**
```bash
# Test HTTP access via IP
curl -I http://YOUR_PRODUCTION_IP

# Test application health endpoint (checks backend services)
curl http://YOUR_PRODUCTION_IP/health
```

**Expected health endpoint response:**
```json
{
  "status": "healthy",
  "service": "sharenet-api",
  "timestamp": "2024-01-01T12:00:00Z"
}
```

**Note**: The `/health` endpoint proxies to the backend service and returns actual service status. If the backend is not running, this endpoint returns an error, making it a true health check for the application.
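In scripts, the health response can be checked without extra tooling; a minimal sketch that extracts the `status` field with `sed` (using `jq` would be the more robust choice, and `health_status` is an illustrative name):

```bash
# Extract "status" from a health-check response (sketch; assumes the JSON shape above)
health_status() {
  sed -n 's/.*"status" *: *"\([^"]*\)".*/\1/p' <<<"$1"
}

response='{"status":"healthy","service":"sharenet-api","timestamp":"2024-01-01T12:00:00Z"}'
health_status "$response"   # → healthy
```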
### Step 27: Test Monitoring

```bash
# On CI/CD server
cd /opt/registry
./scripts/monitor.sh --type ci-cd

# On Production server
cd /opt/APP_NAME
./scripts/monitor.sh --type production --app-name APP_NAME
```

### Step 28: Test Registry Access

```bash
# Test registry API
curl http://YOUR_CI_CD_IP:5000/v2/_catalog

# Test registry UI (optional)
curl -I http://YOUR_CI_CD_IP:8080
```
---

## Troubleshooting

### Common Issues

1. **Docker permission denied**:
   ```bash
   sudo usermod -aG docker SERVICE_USER
   newgrp docker
   ```

2. **SSL certificate issues** (domain users only):
   ```bash
   sudo certbot certificates
   sudo certbot renew --dry-run
   ```

3. **Application not starting**:
   ```bash
   cd /opt/APP_NAME
   docker-compose logs
   ```

4. **SSH connection failed**:
   ```bash
   ssh -v ci-cd 'echo test'
   ssh -v production 'echo test'
   ```

5. **Registry connection failed**:
   ```bash
   curl -v http://YOUR_CI_CD_IP:5000/v2/_catalog
   ```

6. **Actions runner not starting**:
   ```bash
   sudo systemctl status forgejo-runner.service
   sudo journalctl -u forgejo-runner.service -f
   ```
### Useful Commands

- **Check system resources**: `htop`
- **Check disk space**: `df -h`
- **Check memory usage**: `free -h`
- **Check network**: `ip addr show`
- **Check firewall**: `sudo ufw status`
- **Check logs**: `sudo journalctl -f`

### Security Best Practices

1. **Service Account**: Use a dedicated `SERVICE_USER` account with limited privileges
2. **SSH Keys**: Use Ed25519 keys with proper permissions (600/700)
3. **Firewall**: Configure UFW to allow only necessary ports
4. **Fail2ban**: Protect against brute force attacks
5. **SSL/TLS**: Use Let's Encrypt certificates with automatic renewal (domain users only)
6. **Regular Backups**: Automated daily backups of database and configuration
7. **Container Isolation**: Applications run in isolated Docker containers
8. **Security Headers**: Nginx configured with security headers
9. **Registry Security**: Use secure authentication and HTTPS for registry access

### Monitoring and Maintenance

#### Daily Monitoring

```bash
# On CI/CD server
cd /opt/registry
./scripts/monitor.sh --type ci-cd

# On Production server
cd /opt/APP_NAME
./scripts/monitor.sh --type production --app-name APP_NAME
```

#### Weekly Maintenance

1. **Check disk space**: `df -h`
2. **Review logs**: `docker-compose logs --tail=100`
3. **Update system**: `sudo apt update && sudo apt upgrade`
4. **Test backups**: Verify backup files exist and are recent

   ```bash
   # On Production server
   cd /opt/APP_NAME
   ./scripts/backup.sh --type production --app-name APP_NAME --dry-run

   # Check backup directory
   ls -la backups/
   ```

#### Monthly Maintenance

1. **Review security**: Check firewall rules and fail2ban status
2. **Update certificates**: Ensure SSL certificates are valid (domain users only)
3. **Clean up old images**: Run the cleanup script to remove unused Docker images

   ```bash
   # On CI/CD server
   cd /opt/registry
   ./scripts/cleanup.sh --type ci-cd

   # On Production server
   cd /opt/APP_NAME
   ./scripts/cleanup.sh --type production
   ```

4. **Review monitoring**: Check application performance and logs
5. **Verify registry access**: Test registry connectivity and authentication

---
## Summary

Your complete CI/CD pipeline is now ready! The setup includes:

### CI/CD Linode Features
- **Forgejo Actions runner** for automated builds
- **Local Docker registry** with web UI for image management
- **Secure SSH communication** with the production server
- **Monitoring and cleanup** scripts
- **Firewall protection** for security

### Production Linode Features
- **Docker-based application deployment**
- **Nginx reverse proxy** with security headers
- **Automated backup and monitoring** scripts
- **Firewall and fail2ban protection** for security
- **Optional SSL/TLS certificate management** (if a domain is provided)

### Pipeline Features
- **Automated testing** on every code push
- **Automated image building** and registry push
- **Automated deployment** to production
- **Rollback capability** with image versioning
- **Health monitoring** and logging

### Registry Integration
- **Private registry** on the CI/CD Linode stores all production images
- **Images available** for manual deployment via `PRODUCTION_LINODE_MANUAL_SETUP.md`
- **Version control** with git commit SHA tags
- **Web UI** for image management at `http://YOUR_CI_CD_IP:8080`

### Access Methods
- **Domain users**: Access via `https://your-domain.com`
- **IP-only users**: Access via `http://YOUR_PRODUCTION_IP`
- **Registry UI**: Access via `http://YOUR_CI_CD_IP:8080`

For ongoing maintenance and troubleshooting, refer to the troubleshooting section and the monitoring scripts provided in this guide.