CI/CD Pipeline Setup Guide
This guide covers setting up a complete Continuous Integration/Continuous Deployment (CI/CD) pipeline with a CI/CD Linode and Production Linode for automated builds, testing, and deployments.
Architecture Overview
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Forgejo Host │ │ CI/CD Linode │ │ Production Linode│
│ (Repository) │ │ (Actions Runner)│ │ (Docker Deploy) │
│ │ │ + Harbor Registry│ │ │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│ │ │
│ │ │
└─────────── Push ──────┼───────────────────────┘
│
└─── Deploy ────────────┘
Pipeline Flow
- Code Push: Developer pushes code to Forgejo repository
- Automated Testing: CI/CD Linode runs tests on backend and frontend
- Image Building: If tests pass, Docker images are built
- Registry Push: Images are pushed to Harbor registry on CI/CD Linode
- Production Deployment: Production Linode pulls images and deploys
- Health Check: Application is verified and accessible
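Conceptually, each pipeline run is equivalent to the following shell sequence. This is only an illustrative sketch: the directory layout (./backend, ./frontend) and the test commands are assumptions, and the real steps live in the repository's .forgejo/workflows/ci.yml (see Step 18.2).
# 1. Run the test suites (commands depend on your stack; Rust + Node are assumed here)
cargo test                                    # backend tests
npm test                                      # frontend tests
# 2. Build the Docker images
docker build -t YOUR_CI_CD_IP:8080/public/backend:latest ./backend
docker build -t YOUR_CI_CD_IP:8080/public/frontend:latest ./frontend
# 3. Push the images to the Harbor registry on the CI/CD Linode
docker push YOUR_CI_CD_IP:8080/public/backend:latest
docker push YOUR_CI_CD_IP:8080/public/frontend:latest
# 4. Deploy on the Production Linode
ssh production "cd /opt/APP_NAME && docker compose pull && docker compose up -d"
# 5. Verify the deployment
curl -f http://YOUR_PRODUCTION_IP:3001/health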
Prerequisites
- Two Ubuntu 24.04 LTS Linodes with root access
- Basic familiarity with Linux commands and SSH
- Forgejo repository with Actions enabled
- Optional: Domain name for Production Linode (for SSL/TLS)
Quick Start
- Set up the CI/CD Linode (Steps 1-11)
- Set up the Production Linode (Steps 12-20)
- Configure SSH key exchange between the Linodes (Steps 7 and 19)
- Set up Forgejo repository secrets and register the Actions runner (Step 21)
- Test the complete pipeline (Step 23)
What's Included
CI/CD Linode Features
- Forgejo Actions runner for automated builds
- Harbor container registry for image storage
- Harbor web UI for image management
- Built-in vulnerability scanning with Trivy
- Role-based access control and audit logs
- Secure SSH communication with production
Production Linode Features
- Docker-based application deployment
- Optional SSL/TLS certificate management (if domain is provided)
- Nginx reverse proxy with security headers
- Automated backups and monitoring
- Firewall and fail2ban protection
Pipeline Features
- Automated testing on every code push
- Automated image building and registry push
- Automated deployment to production
- Rollback capability with image versioning (see the rollback sketch below)
- Health monitoring and logging
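Rollback works because every pushed image keeps its tag in Harbor and the production .env file (Step 18.3) selects which tag to run via IMAGE_TAG. A minimal rollback sketch, assuming the repository's docker-compose.yml interpolates IMAGE_TAG from .env (PREVIOUS_TAG is a placeholder for an earlier tag, e.g. a commit SHA):
# On the Production Linode: roll back to an earlier image tag
cd /opt/APP_NAME
sed -i 's/^IMAGE_TAG=.*/IMAGE_TAG=PREVIOUS_TAG/' .env   # point at the previous tag
docker compose pull                                      # fetch that tag from Harbor
docker compose up -d                                     # restart the stack on the old images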
Security Model and User Separation
This setup uses a principle of least privilege approach with separate users for different purposes:
User Roles
- Root User
  - Purpose: Initial system setup only
  - SSH Access: Disabled after setup
  - Privileges: Full system access (used only during initial configuration)
- Deployment User (DEPLOY_USER)
  - Purpose: SSH access, deployment tasks, system administration
  - SSH Access: Enabled with key-based authentication
  - Privileges: Sudo access for deployment and administrative tasks
  - Examples: deploy, ci, admin
- Service Account (SERVICE_USER)
  - Purpose: Running application services (Docker containers, databases)
  - SSH Access: None (no login shell)
  - Privileges: No sudo access, minimal system access
  - Examples: appuser, service, app
Security Benefits
- No root SSH access: Eliminates the most common attack vector
- Principle of least privilege: Each user has only the access they need
- Separation of concerns: Deployment tasks vs. service execution are separate
- Audit trail: Clear distinction between deployment and service activities
- Reduced attack surface: Service account has minimal privileges
File Permissions
- Application files: Owned by SERVICE_USER for security
- Docker operations: Run by DEPLOY_USER with sudo (deployment only)
- Service execution: Run by SERVICE_USER (no sudo needed)
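Once the users and the application directory exist (Steps 0.4.4, 2, 3, 13, and 17), the following commands are one way to confirm the separation described above (placeholders assumed replaced with your chosen names):
id DEPLOY_USER            # should list the sudo group
id SERVICE_USER           # should list the docker group (after Step 4.3) but not sudo
sudo -l -U DEPLOY_USER    # shows the NOPASSWD rule added in Step 0.4.4
sudo -l -U SERVICE_USER   # should report that the user may not run sudo
ls -ld /opt/APP_NAME      # application directory should be owned by SERVICE_USER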
Prerequisites and Initial Setup
What's Already Done (Assumptions)
This guide assumes you have already:
- Created two Ubuntu 24.04 LTS Linodes with root access
- Set root passwords for both Linodes
- Installed an SSH client on your local machine
- Created a Forgejo repository with Actions enabled
- Optional: Pointed a domain name at the Production Linode's IP addresses
Step 0: Initial SSH Access and Verification
Before proceeding with the setup, you need to establish initial SSH access to both Linodes.
0.1 Get Your Linode IP Addresses
From your Linode dashboard, note the IP addresses for:
- CI/CD Linode: YOUR_CI_CD_IP (IP address only, no domain needed)
- Production Linode: YOUR_PRODUCTION_IP (IP address for SSH, domain for web access)
0.2 Test Initial SSH Access
Test SSH access to both Linodes:
# Test CI/CD Linode (IP address only)
ssh root@YOUR_CI_CD_IP
# Test Production Linode (IP address only)
ssh root@YOUR_PRODUCTION_IP
Expected output: SSH login prompt asking for root password.
If something goes wrong:
- Verify the IP addresses are correct
- Check that SSH is enabled on the Linodes
- Ensure your local machine can reach the Linodes (no firewall blocking)
0.3 Choose Your Names
Before proceeding, decide on:
- Service Account Name: Choose a username for the service account (e.g., appuser, service, app)
  - Replace SERVICE_USER in this guide with your chosen name
  - This account runs the actual application services
- Deployment User Name: Choose a username for deployment tasks (e.g., deploy, ci, admin)
  - Replace DEPLOY_USER in this guide with your chosen name
  - This account has sudo privileges for deployment tasks
- Application Name: Choose a name for your application (e.g., myapp, webapp, api)
  - Replace APP_NAME in this guide with your chosen name
- Domain Name (Optional): If you have a domain, note it for SSL configuration
  - Replace your-domain.com in this guide with your actual domain
Example: If you choose appuser as the service account, deploy as the deployment user, and myapp as the application name:
- Replace all SERVICE_USER with appuser
- Replace all DEPLOY_USER with deploy
- Replace all APP_NAME with myapp
- If you have a domain example.com, replace your-domain.com with example.com
Security Model:
- Service Account (SERVICE_USER): Runs application services, no sudo access
- Deployment User (DEPLOY_USER): Handles deployments via SSH, has sudo access
- Root: Only used for initial setup, then disabled for SSH access
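If you would rather not hand-edit every command, one optional convenience is to export your chosen names as shell variables on each machine and substitute them as you paste commands (the guide itself uses literal placeholders); the values below are only examples:
export DEPLOY_USER=deploy       # your deployment user
export SERVICE_USER=appuser     # your service account
export APP_NAME=myapp           # your application name
export DOMAIN=example.com       # only if you have a domain
echo "Deploy: $DEPLOY_USER, Service: $SERVICE_USER, App: $APP_NAME"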
0.4 Set Up SSH Key Authentication for Local Development
Important: This step should be done on both Linodes to enable secure SSH access from your local development machine.
0.4.1 Generate SSH Key on Your Local Machine
On your local development machine, generate an SSH key pair:
# Generate SSH key pair (if you don't already have one)
ssh-keygen -t ed25519 -C "your-email@example.com" -f ~/.ssh/id_ed25519 -N ""
# Or use existing key if you have one
ls ~/.ssh/id_ed25519.pub
0.4.2 Add Your Public Key to Both Linodes
Copy your public key to both Linodes:
# Copy your public key to CI/CD Linode
ssh-copy-id root@YOUR_CI_CD_IP
# Copy your public key to Production Linode
ssh-copy-id root@YOUR_PRODUCTION_IP
Alternative method (if ssh-copy-id doesn't work):
# Copy your public key content
cat ~/.ssh/id_ed25519.pub
# Then manually add to each server
ssh root@YOUR_CI_CD_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys
ssh root@YOUR_PRODUCTION_IP
echo "YOUR_PUBLIC_KEY_CONTENT" >> ~/.ssh/authorized_keys
0.4.3 Test SSH Key Authentication
Test that you can access both servers without passwords:
# Test CI/CD Linode
ssh root@YOUR_CI_CD_IP 'echo "SSH key authentication works for CI/CD"'
# Test Production Linode
ssh root@YOUR_PRODUCTION_IP 'echo "SSH key authentication works for Production"'
Expected output: The echo messages should appear without password prompts.
0.4.4 Create Deployment Users
On both Linodes, create the deployment user with sudo privileges:
# Create deployment user
sudo useradd -m -s /bin/bash DEPLOY_USER
sudo usermod -aG sudo DEPLOY_USER
# Set a secure password (for emergency access only)
echo "DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd
# Copy your SSH key to the deployment user
sudo mkdir -p /home/DEPLOY_USER/.ssh
sudo cp ~/.ssh/authorized_keys /home/DEPLOY_USER/.ssh/
sudo chown -R DEPLOY_USER:DEPLOY_USER /home/DEPLOY_USER/.ssh
sudo chmod 700 /home/DEPLOY_USER/.ssh
sudo chmod 600 /home/DEPLOY_USER/.ssh/authorized_keys
# Allow passwordless sudo for the deployment user (needed for unattended CI/CD tasks)
echo "DEPLOY_USER ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/DEPLOY_USER
sudo chmod 440 /etc/sudoers.d/DEPLOY_USER
Security Note: This configuration allows the DEPLOY_USER to use sudo without a password, which is more secure for CI/CD automation since there are no passwords to store or expose. The random password is set for emergency console access only.
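Before logging out of the root session, it is worth validating the sudoers drop-in, since a syntax error in a sudoers file can lock you out of sudo entirely:
# Check the drop-in file for syntax errors
sudo visudo -c -f /etc/sudoers.d/DEPLOY_USER
# Expected output: /etc/sudoers.d/DEPLOY_USER: parsed OK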
0.4.5 Test Sudo Access
Test that the deployment user can use sudo without password prompts:
# Test sudo access
ssh DEPLOY_USER@YOUR_CI_CD_IP 'sudo whoami'
ssh DEPLOY_USER@YOUR_PRODUCTION_IP 'sudo whoami'
Expected output: Both commands should return root
without prompting for a password.
0.4.6 Test Deployment User Access
Test that you can access both servers as the deployment user:
# Test CI/CD Linode
ssh DEPLOY_USER@YOUR_CI_CD_IP 'echo "Deployment user SSH access works for CI/CD"'
# Test Production Linode
ssh DEPLOY_USER@YOUR_PRODUCTION_IP 'echo "Deployment user SSH access works for Production"'
Expected output: The echo messages should appear without password prompts.
0.4.7 Create SSH Config for Easy Access
On your local machine, create an SSH config file for easy access:
# Create SSH config
cat > ~/.ssh/config << 'EOF'
Host ci-cd-dev
HostName YOUR_CI_CD_IP
User DEPLOY_USER
IdentityFile ~/.ssh/id_ed25519
StrictHostKeyChecking no
Host production-dev
HostName YOUR_PRODUCTION_IP
User DEPLOY_USER
IdentityFile ~/.ssh/id_ed25519
StrictHostKeyChecking no
EOF
chmod 600 ~/.ssh/config
Now you can access servers easily:
ssh ci-cd-dev
ssh production-dev
Part 1: CI/CD Linode Setup
Step 1: Initial System Setup
1.1 Update the System
sudo apt update && sudo apt upgrade -y
What this does: Updates package lists and upgrades all installed packages to their latest versions.
Expected output: A list of packages being updated, followed by completion messages.
1.2 Configure Timezone
# Configure timezone interactively
sudo dpkg-reconfigure tzdata
# Verify timezone setting
date
What this does: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).
Expected output: After selecting your timezone, the date
command should show the current date and time in your selected timezone.
1.3 Configure /etc/hosts
# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_CI_CD_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts
# Verify the configuration
cat /etc/hosts
What this does:
- Adds localhost entries for both IPv4 and IPv6 addresses to
/etc/hosts
- Ensures proper localhost resolution for both IPv4 and IPv6
Important: Replace YOUR_CI_CD_IPV4_ADDRESS
and YOUR_CI_CD_IPV6_ADDRESS
with the actual IPv4 and IPv6 addresses of your CI/CD Linode obtained from your Linode dashboard.
Expected output: The /etc/hosts file should show entries for 127.0.0.1, ::1, and your Linode's actual IP addresses all mapping to localhost.
1.4 Install Essential Packages
sudo apt install -y \
curl \
wget \
git \
build-essential \
pkg-config \
libssl-dev \
ca-certificates \
apt-transport-https \
software-properties-common \
apache2-utils
What this does: Installs development tools, SSL libraries, and utilities needed for Docker and application building.
Step 2: Create Users
2.1 Create Service Account
# Create dedicated group for the service account
sudo groupadd -r SERVICE_USER
# Create service account user with dedicated group
sudo useradd -r -g SERVICE_USER -s /bin/bash -m -d /home/SERVICE_USER SERVICE_USER
echo "SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd
2.2 Verify Users
sudo su - SERVICE_USER
whoami
pwd
exit
sudo su - DEPLOY_USER
whoami
pwd
exit
Step 3: Clone Repository for Registry Configuration
# Switch to DEPLOY_USER (who has sudo access)
sudo su - DEPLOY_USER
# Create application directory and clone repository
sudo mkdir -p /opt/APP_NAME
sudo chown SERVICE_USER:SERVICE_USER /opt/APP_NAME
cd /opt
sudo git clone https://your-forgejo-instance/your-username/APP_NAME.git
sudo chown -R SERVICE_USER:SERVICE_USER APP_NAME/
# Verify the registry folder exists
ls -la /opt/APP_NAME/registry/
Important: Replace your-forgejo-instance
, your-username
, and APP_NAME
with your actual Forgejo instance URL, username, and application name.
What this does:
- DEPLOY_USER creates the directory structure and clones the repository
- SERVICE_USER owns all the files for security
- Registry configuration files are now available at
/opt/APP_NAME/registry/
Step 4: Install Docker
4.1 Add Docker Repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
4.2 Install Docker Packages
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
4.3 Configure Docker for Service Account
sudo usermod -aG docker SERVICE_USER
Step 5: Set Up Harbor Container Registry
5.1 Create Harbor Directory
sudo mkdir -p /opt/registry
sudo chown SERVICE_USER:SERVICE_USER /opt/registry
5.2 Generate SSL Certificates
# Create system SSL directory for Harbor certificates
sudo mkdir -p /etc/ssl/registry
# Get your actual IP address
YOUR_ACTUAL_IP=$(curl -4 -s ifconfig.me)
echo "Your IP address is: $YOUR_ACTUAL_IP"
# Generate self-signed certificate with actual IP in system directory
sudo openssl req -x509 -newkey rsa:4096 -keyout /etc/ssl/registry/registry.key -out /etc/ssl/registry/registry.crt -days 365 -nodes -subj "/C=US/ST=State/L=City/O=Organization/CN=$YOUR_ACTUAL_IP"
# Set proper permissions
sudo chmod 600 /etc/ssl/registry/registry.key
sudo chmod 644 /etc/ssl/registry/registry.crt
Important: The certificate is now generated in the system SSL directory /etc/ssl/registry/
with your actual CI/CD Linode IP address automatically.
Note: The permissions are set to:
- registry.key: 600 (owner read/write only) - the private key must be kept secure
- registry.crt: 644 (owner read/write, group/others read) - the certificate can be read by services
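You can optionally inspect the generated certificate to confirm the CN and validity period before wiring it into Harbor:
# Show the certificate subject and validity window
sudo openssl x509 -in /etc/ssl/registry/registry.crt -noout -subject -dates
# The CN should be your CI/CD Linode IP and notAfter should be roughly one year out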
5.3 Update Harbor Configuration with Actual IP Address
# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER
cd /opt/APP_NAME/registry
# Get your actual IP address
YOUR_ACTUAL_IP=$(curl -4 -s ifconfig.me)
echo "Your IP address is: $YOUR_ACTUAL_IP"
# Replace placeholder IP addresses in Harbor configuration files
sed -i "s/YOUR_CI_CD_IP/$YOUR_ACTUAL_IP/g" harbor.yml
sed -i "s/YOUR_CI_CD_IP/$YOUR_ACTUAL_IP/g" docker-compose.yml
# Replace placeholder application name in configuration files
sed -i "s/APP_NAME/ACTUAL_APP_NAME/g" docker-compose.yml
# Exit SERVICE_USER shell
exit
Important: This step replaces all instances of YOUR_CI_CD_IP
with your actual CI/CD Linode IP address and all instances of APP_NAME
with the actual application name in the Harbor configuration files.
5.4 Set Harbor Environment Variables
# Set environment variables for Harbor (re-derive the IP in case this is a new shell)
YOUR_ACTUAL_IP=$(curl -4 -s ifconfig.me)
export HARBOR_HOSTNAME=$YOUR_ACTUAL_IP
export HARBOR_ADMIN_PASSWORD="Harbor12345"   # replace with a strong admin password
export DB_PASSWORD="your-db-password"        # replace with a strong database password
# Update Harbor configuration with secure passwords
cd /opt/APP_NAME/registry
sed -i "s/Harbor12345/$HARBOR_ADMIN_PASSWORD/g" harbor.yml
sed -i "s/your-db-password/$DB_PASSWORD/g" harbor.yml
sed -i "s/your-db-password/$DB_PASSWORD/g" docker-compose.yml
Important: Change the default passwords for production use. The default admin password is Harbor12345
- change this immediately after first login.
5.5 Start Harbor
# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER
cd /opt/APP_NAME/registry
docker compose up -d
# Exit SERVICE_USER shell
exit
Important: Harbor startup can take 2-3 minutes as it initializes the database and downloads vulnerability databases. The health check will ensure all services are running properly.
5.6 Wait for Harbor Startup
# Monitor Harbor startup progress
cd /opt/APP_NAME/registry
docker compose logs -f
Expected output: You should see logs from all Harbor services (core, database, redis, registry, portal, nginx, jobservice, trivy) starting up. Wait until you see "Harbor has been installed and started successfully" or similar success messages.
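If you prefer to script the wait instead of watching the logs, a simple poll of the health endpoint works; this is a sketch, and the exact JSON field names may differ slightly between Harbor versions:
# Poll the Harbor health endpoint for up to ~5 minutes
for i in $(seq 1 30); do
  if curl -k -s https://localhost:8080/api/v2.0/health | grep -q '"status":"healthy"'; then
    echo "Harbor is healthy"
    break
  fi
  echo "Waiting for Harbor... ($i/30)"
  sleep 10
done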
5.7 Test Harbor Setup
# Check if all Harbor containers are running
cd /opt/APP_NAME/registry
docker compose ps
# Test Harbor API (HTTPS)
curl -k https://localhost:8080/api/v2.0/health
# Test Harbor UI (HTTPS)
curl -k -I https://localhost:8080
# Expected output: HTTP/1.1 200 OK
Important: All Harbor services should show as "Up" in the docker compose ps
output. The health check should return a JSON response indicating all services are healthy.
5.8 Access Harbor Web UI
- Open your browser and navigate to: https://YOUR_CI_CD_IP:8080
- Login with the default credentials:
  - Username: admin
  - Password: Harbor12345 (or your configured password)
- Change the admin password when prompted (required on first login)
5.9 Configure Harbor for Public Read, Authenticated Write
- Create a Public Project:
  - Go to Projects → New Project
  - Set Project Name: public
  - Set Access Level: Public
  - Click OK
- Create a Private Project (for authenticated writes):
  - Go to Projects → New Project
  - Set Project Name: private
  - Set Access Level: Private
  - Click OK
- Create a User for CI/CD:
  - Go to Administration → Users → New User
  - Set Username: ci-user
  - Set Email: ci@example.com
  - Set Password: your-secure-password
  - Set Role: Developer
  - Click OK
5.10 Test Harbor Authentication and Access Model
# Test Docker login to Harbor
docker login YOUR_CI_CD_IP:8080
# Enter: ci-user and your-secure-password
# Create a test image
echo "FROM alpine:latest" > /tmp/test.Dockerfile
echo "RUN echo 'Hello from Harbor test image'" >> /tmp/test.Dockerfile
# Build and tag test image for public project
docker build -f /tmp/test.Dockerfile -t YOUR_CI_CD_IP:8080/public/test:latest /tmp
# Push to Harbor (requires authentication)
docker push YOUR_CI_CD_IP:8080/public/test:latest
# Verify image is in Harbor
curl -k https://localhost:8080/v2/_catalog
# Test public pull (no authentication required)
docker logout YOUR_CI_CD_IP:8080
docker pull YOUR_CI_CD_IP:8080/public/test:latest
# Clean up test image
docker rmi YOUR_CI_CD_IP:8080/public/test:latest
Expected behavior:
- ✅ Push requires authentication: docker push only works when logged in
- ✅ Pull works without authentication: docker pull works without login for public projects
- ✅ Web UI accessible: the Harbor UI is available at https://YOUR_CI_CD_IP:8080
5.11 Harbor Access Model Summary
Your Harbor registry is now configured with the following access model:
Public Projects (like public):
- ✅ Pull (read): No authentication required
- ✅ Push (write): Requires authentication
- ✅ Web UI: Accessible to view images
Private Projects (like private):
- ✅ Pull (read): Requires authentication
- ✅ Push (write): Requires authentication
- ✅ Web UI: Requires authentication
Security Features:
- ✅ Vulnerability scanning: Automatic CVE scanning with Trivy
- ✅ Role-based access control: Different user roles (admin, developer, guest)
- ✅ Audit logs: Complete trail of all operations
- ✅ Image signing: Content trust features available
Step 6: Configure Docker for Harbor Access
6.1 Configure Docker for Harbor Access
# Copy the certificate to Docker's trusted certificates
sudo cp /etc/ssl/registry/registry.crt /usr/local/share/ca-certificates/registry.crt
sudo update-ca-certificates
# Configure Docker to trust Harbor registry
sudo tee /etc/docker/daemon.json << EOF
{
"insecure-registries": ["YOUR_CI_CD_IP:8080"],
"registry-mirrors": []
}
EOF
Important: Replace YOUR_CI_CD_IP
with your actual CI/CD Linode IP address.
6.2 Restart Docker
sudo systemctl restart docker
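A quick way to confirm that Docker picked up the new configuration after the restart:
# The output should list YOUR_CI_CD_IP:8080 under "Insecure Registries"
docker info | grep -A 3 "Insecure Registries"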
Harbor Access Model
Your Harbor registry is now configured with the following access model:
Public Read Access
Anyone can pull images from public projects without authentication:
# From any machine (public access to public projects)
docker pull YOUR_CI_CD_IP:8080/public/backend:latest
docker pull YOUR_CI_CD_IP:8080/public/frontend:latest
Authenticated Write Access
Only authenticated users can push images:
# Login to Harbor first
docker login YOUR_CI_CD_IP:8080
# Enter: ci-user and your-secure-password
# Then push to Harbor
docker push YOUR_CI_CD_IP:8080/public/backend:latest
docker push YOUR_CI_CD_IP:8080/public/frontend:latest
Harbor Web UI Access
Modern web interface for managing images:
https://YOUR_CI_CD_IP:8080
Client Configuration
For other machines to pull images from public projects, they only need:
# Add to /etc/docker/daemon.json on client machines
{
"insecure-registries": ["YOUR_CI_CD_IP:8080"]
}
# No authentication needed for pulls from public projects
CI/CD Pipeline Configuration
For automated deployments, use the ci-user
credentials:
# In CI/CD pipeline
echo "ci-user:your-secure-password" | docker login YOUR_CI_CD_IP:8080 --username ci-user --password-stdin
docker push YOUR_CI_CD_IP:8080/public/backend:latest
Step 7: Set Up SSH for Production Communication
7.1 Generate SSH Key Pair
ssh-keygen -t ed25519 -C "ci-cd-server" -f ~/.ssh/id_ed25519 -N ""
7.2 Create SSH Config
cat > ~/.ssh/config << 'EOF'
Host production
HostName YOUR_PRODUCTION_IP
User DEPLOY_USER
IdentityFile ~/.ssh/id_ed25519
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
EOF
chmod 600 ~/.ssh/config
Step 8: Install Forgejo Actions Runner
8.1 Download Runner
cd ~
wget https://code.forgejo.org/forgejo/runner/releases/download/v0.2.11/forgejo-runner-0.2.11-linux-amd64
chmod +x forgejo-runner-0.2.11-linux-amd64
sudo mv forgejo-runner-0.2.11-linux-amd64 /usr/local/bin/forgejo-runner
8.2 Create Systemd Service
sudo tee /etc/systemd/system/forgejo-runner.service > /dev/null << 'EOF'
[Unit]
Description=Forgejo Actions Runner
After=network.target
[Service]
Type=simple
User=SERVICE_USER
WorkingDirectory=/home/SERVICE_USER
ExecStart=/usr/local/bin/forgejo-runner daemon
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
8.3 Enable Service
sudo systemctl daemon-reload
sudo systemctl enable forgejo-runner.service
8.4 Test Runner Configuration
# Check if the runner is running
sudo systemctl status forgejo-runner.service
# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager
# Test runner connectivity (in a separate terminal)
forgejo-runner list
# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "ci-cd-runner" with status "Online"
Expected Output:
- systemctl status should show "active (running)"
- forgejo-runner list should show your runner
- The Forgejo web interface should show the runner as online
If something goes wrong:
- Check logs: sudo journalctl -u forgejo-runner.service -f
- Verify token: Make sure the registration token is correct
- Check network: Ensure the runner can reach your Forgejo instance
- Restart service: sudo systemctl restart forgejo-runner.service
Note: The runner is registered and started later (Step 21.2), so at this stage systemctl status may still report the service as inactive - that is expected until registration is complete.
Step 9: Set Up Monitoring and Cleanup
9.1 Monitoring Script
Important: The repository includes a pre-configured monitoring script in the scripts/
directory that can be used for both CI/CD and production monitoring.
Repository Script: scripts/monitor.sh - Comprehensive monitoring script with support for both CI/CD and production environments
To use the repository monitoring script:
# The repository is already cloned at /opt/APP_NAME/
cd /opt/APP_NAME
# Make the script executable
chmod +x scripts/monitor.sh
# Test CI/CD monitoring
./scripts/monitor.sh --type ci-cd
# Test production monitoring (if you have a production setup)
./scripts/monitor.sh --type production
Note: The repository script is more comprehensive and includes proper error handling, colored output, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate monitoring information.
9.2 Cleanup Script
Important: The repository includes a pre-configured cleanup script in the scripts/
directory that can be used for both CI/CD and production cleanup operations.
Repository Script: scripts/cleanup.sh - Comprehensive cleanup script with support for both CI/CD and production environments
To use the repository cleanup script:
# The repository is already cloned at /opt/APP_NAME/
cd /opt/APP_NAME
# Make the script executable
chmod +x scripts/cleanup.sh
# Test CI/CD cleanup (dry run first)
./scripts/cleanup.sh --type ci-cd --dry-run
# Run CI/CD cleanup
./scripts/cleanup.sh --type ci-cd
# Test production cleanup (dry run first)
./scripts/cleanup.sh --type production --dry-run
Note: The repository script is more comprehensive and includes proper error handling, colored output, dry-run mode, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate cleanup operations.
9.3 Test Cleanup Script
# Create some test images to clean up
docker pull alpine:latest
docker pull nginx:latest
docker tag alpine:latest test-cleanup:latest
docker tag nginx:latest test-cleanup2:latest
# Test cleanup with dry run first
./scripts/cleanup.sh --type ci-cd --dry-run
# Run the cleanup script
./scripts/cleanup.sh --type ci-cd
# Verify cleanup worked
echo "Checking remaining images:"
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"
echo "Checking remaining volumes:"
docker volume ls
echo "Checking remaining networks:"
docker network ls
Expected Output:
- Cleanup script should run without errors
- Test images should be removed
- System should report cleanup completion
- Remaining images should be minimal (only actively used ones)
If something goes wrong:
- Check script permissions: ls -la scripts/cleanup.sh
- Verify Docker access: docker ps
- Check registry access: cd /opt/APP_NAME/registry && docker compose ps
- Run manually: bash -x scripts/cleanup.sh
9.4 Set Up Automated Cleanup
# Create a cron job to run cleanup daily at 3 AM using the repository script
(crontab -l 2>/dev/null; echo "0 3 * * * cd /opt/APP_NAME && ./scripts/cleanup.sh --type ci-cd >> /tmp/cleanup.log 2>&1") | crontab -
# Verify the cron job was added
crontab -l
What this does:
- Runs automatically: The cleanup script runs every day at 3:00 AM
- Frequency: Daily cleanup to prevent disk space issues
- Logging: All cleanup output is logged to /tmp/cleanup.log
- What it cleans: Unused Docker images, volumes, networks, and Harbor images
Step 10: Configure Firewall
sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 8080/tcp # Harbor registry (public read access)
Security Model:
- Port 8080 (Harbor): Public read access for public projects, authenticated write access
- SSH: Allowed from anywhere by default; optionally restrict it to your own IP as shown below
- All other ports: Blocked
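If your workstation has a static IP, you can optionally tighten the SSH rule to that address. Be careful: if the IP is wrong or changes, you will lock yourself out of SSH (YOUR_HOME_IP is a placeholder for your own public IP):
# Replace the blanket SSH rule with one limited to your own IP
sudo ufw delete allow ssh
sudo ufw allow from YOUR_HOME_IP to any port 22 proto tcp
sudo ufw status numbered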
Step 11: Test CI/CD Setup
11.1 Test Docker Installation
docker --version
docker compose --version
11.2 Check Harbor Status
cd /opt/APP_NAME/registry
docker compose ps
11.3 Test Harbor Access
# Test Harbor API
curl -k https://localhost:8080/api/v2.0/health
# Test Harbor UI
curl -k -I https://localhost:8080
11.4 Get Public Key for Production Server
cat ~/.ssh/id_ed25519.pub
Important: Copy this public key - you'll need it for the production server setup.
Part 2: Production Linode Setup
Step 12: Initial System Setup
12.1 Update the System
sudo apt update && sudo apt upgrade -y
12.2 Configure Timezone
# Configure timezone interactively
sudo dpkg-reconfigure tzdata
# Verify timezone setting
date
What this does: Opens an interactive dialog to select your timezone. Navigate through the menus to choose your preferred timezone (e.g., UTC, America/New_York, Europe/London, Asia/Tokyo).
Expected output: After selecting your timezone, the date
command should show the current date and time in your selected timezone.
12.3 Configure /etc/hosts
# Add localhost entries for both IPv4 and IPv6
echo "127.0.0.1 localhost" | sudo tee -a /etc/hosts
echo "::1 localhost ip6-localhost ip6-loopback" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV4_ADDRESS localhost" | sudo tee -a /etc/hosts
echo "YOUR_PRODUCTION_IPV6_ADDRESS localhost" | sudo tee -a /etc/hosts
# Verify the configuration
cat /etc/hosts
What this does:
- Adds localhost entries for both IPv4 and IPv6 addresses to
/etc/hosts
- Ensures proper localhost resolution for both IPv4 and IPv6
Important: Replace YOUR_PRODUCTION_IPV4_ADDRESS
and YOUR_PRODUCTION_IPV6_ADDRESS
with the actual IPv4 and IPv6 addresses of your Production Linode obtained from your Linode dashboard.
Expected output: The /etc/hosts file should show entries for 127.0.0.1, ::1, and your Linode's actual IP addresses all mapping to localhost.
12.4 Install Essential Packages
sudo apt install -y \
curl \
wget \
git \
ca-certificates \
apt-transport-https \
software-properties-common \
ufw \
fail2ban \
htop \
nginx \
certbot \
python3-certbot-nginx
Step 13: Create Users
13.1 Create the SERVICE_USER User
# Create dedicated group for the service account
sudo groupadd -r SERVICE_USER
# Create service account user with dedicated group
sudo useradd -r -g SERVICE_USER -s /bin/bash -m -d /home/SERVICE_USER SERVICE_USER
echo "SERVICE_USER:$(openssl rand -base64 32)" | sudo chpasswd
13.2 Create the DEPLOY_USER User
# Create deployment user
sudo useradd -m -s /bin/bash DEPLOY_USER
sudo usermod -aG sudo DEPLOY_USER
echo "DEPLOY_USER:$(openssl rand -base64 32)" | sudo chpasswd
13.3 Verify Users
sudo su - SERVICE_USER
whoami
pwd
exit
sudo su - DEPLOY_USER
whoami
pwd
exit
Step 14: Install Docker
14.1 Add Docker Repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
14.2 Install Docker Packages
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
14.3 Configure Docker for Service Account
sudo usermod -aG docker SERVICE_USER
Step 15: Install Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
Step 16: Configure Security
16.1 Configure Firewall
sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 3000/tcp
sudo ufw allow 3001/tcp
16.2 Configure Fail2ban
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
Step 17: Create Application Directory
17.1 Create Directory Structure
sudo mkdir -p /opt/APP_NAME
sudo chown SERVICE_USER:SERVICE_USER /opt/APP_NAME
Note: Replace APP_NAME with your actual application name. This directory name can be controlled via the APP_NAME secret in your Forgejo repository settings. If you set the APP_NAME secret to myapp, the deployment directory will be /opt/myapp.
17.2 Create SSL Directory (Optional - for domain users)
sudo mkdir -p /opt/APP_NAME/nginx/ssl
sudo chown SERVICE_USER:SERVICE_USER /opt/APP_NAME/nginx/ssl
Step 18: Clone Repository and Set Up Application Files
18.1 Switch to SERVICE_USER User
sudo su - SERVICE_USER
18.2 Clone Repository
cd /opt/APP_NAME
git clone https://your-forgejo-instance/your-username/APP_NAME.git .
Important: The repository includes a pre-configured nginx/nginx.conf
file that handles both SSL and non-SSL scenarios, with proper security headers, rate limiting, and CORS configuration. This file will be automatically used by the Docker Compose setup.
Important: The repository also includes a pre-configured .forgejo/workflows/ci.yml
file that handles the complete CI/CD pipeline including testing, building, and deployment. This workflow is already set up to work with the private registry and production deployment.
Note: Replace your-forgejo-instance
and your-username/APP_NAME
with your actual Forgejo instance URL and repository path.
18.3 Create Environment File
The repository doesn't include a .env.example
file for security reasons. The CI/CD pipeline will create the .env
file dynamically during deployment. However, for manual testing or initial setup, you can create a basic .env
file:
cat > /opt/APP_NAME/.env << 'EOF'
# Production Environment Variables
POSTGRES_PASSWORD=your_secure_password_here
REGISTRY=YOUR_CI_CD_IP:8080
IMAGE_NAME=APP_NAME
IMAGE_TAG=latest
# Database Configuration
POSTGRES_DB=sharenet
POSTGRES_USER=sharenet
DATABASE_URL=postgresql://sharenet:your_secure_password_here@postgres:5432/sharenet
# Application Configuration
NODE_ENV=production
RUST_LOG=info
RUST_BACKTRACE=1
EOF
Important: Replace YOUR_CI_CD_IP
with your actual CI/CD Linode IP address and your_secure_password_here
with a strong password.
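One way to avoid inventing and typing a password by hand is to generate it and substitute it into the file in place. This is a sketch; record the generated value somewhere safe, since it must also match the POSTGRES_PASSWORD repository secret configured in Step 21:
# Generate a strong password and replace the placeholder everywhere it appears
POSTGRES_PASSWORD=$(openssl rand -base64 24)
sed -i "s|your_secure_password_here|$POSTGRES_PASSWORD|g" /opt/APP_NAME/.env
grep -c "$POSTGRES_PASSWORD" /opt/APP_NAME/.env   # should print 2 (POSTGRES_PASSWORD and DATABASE_URL)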
18.4 Configure Docker for Harbor Access
# Add the CI/CD Harbor registry to Docker's insecure registries
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json << EOF
{
"insecure-registries": ["YOUR_CI_CD_IP:8080"]
}
EOF
# Restart Docker to apply changes
sudo systemctl restart docker
Important: Replace YOUR_CI_CD_IP
with your actual CI/CD Linode IP address.
Step 19: Set Up SSH Key Authentication
19.1 Add CI/CD Public Key
# Create .ssh directory for SERVICE_USER
mkdir -p ~/.ssh
chmod 700 ~/.ssh
# Add the CI/CD public key (copy from CI/CD Linode)
echo "YOUR_CI_CD_PUBLIC_KEY" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
Important: Replace YOUR_CI_CD_PUBLIC_KEY
with the public key from the CI/CD Linode (the output from cat ~/.ssh/id_ed25519.pub
on the CI/CD Linode).
19.2 Test SSH Connection
From the CI/CD Linode, test the SSH connection:
ssh production
Expected output: You should be able to SSH to the production server without a password prompt.
Step 20: Test Production Setup
20.1 Test Docker Installation
docker --version
docker compose --version
20.2 Test Harbor Access
# Test pulling an image from the CI/CD Harbor registry
docker pull YOUR_CI_CD_IP:8080/public/backend:latest
Important: Replace YOUR_CI_CD_IP with your actual CI/CD Linode IP address. Note that public/backend:latest only exists after the pipeline has pushed it; until then, you can pull the public/test:latest image from Step 5.10 to verify registry access.
20.3 Test Application Deployment
cd /opt/APP_NAME
docker compose up -d
20.4 Verify Application Status
docker compose ps
curl http://localhost:3000
curl http://localhost:3001/health
Expected Output:
- All containers should be running
- Frontend should be accessible on port 3000
- Backend health check should return 200 OK
Part 3: Final Configuration and Testing
Step 21: Configure Forgejo Repository Secrets
21.1 Required Repository Secrets
Go to your Forgejo repository and add these secrets in Settings → Secrets and Variables → Actions:
Required Secrets:
- CI_CD_IP: Your CI/CD Linode IP address
- PRODUCTION_IP: Your Production Linode IP address
- DEPLOY_USER: The deployment user name (e.g., deploy, ci, admin)
- SERVICE_USER: The service user name (e.g., appuser, service, app)
- APP_NAME: Your application name (e.g., sharenet, myapp)
- POSTGRES_PASSWORD: A strong password for the PostgreSQL database
Optional Secrets (for domain users):
- DOMAIN: Your domain name (e.g., example.com)
- EMAIL: Your email for SSL certificate notifications
21.2 Configure Forgejo Actions Runner
21.2.1 Get Runner Token
- Go to your Forgejo repository
- Navigate to Settings → Actions → Runners
- Click "New runner"
- Copy the registration token
21.2.2 Configure Runner
# Switch to SERVICE_USER on the CI/CD Linode - the systemd unit from Step 8.2 runs the
# runner as SERVICE_USER with /home/SERVICE_USER as its working directory, so the
# registration file must be created there
sudo su - SERVICE_USER
# Get the registration token from your Forgejo repository
# Go to Settings → Actions → Runners → New runner
# Copy the registration token
# Configure the runner
forgejo-runner register \
--instance https://your-forgejo-instance \
--token YOUR_TOKEN \
--name "ci-cd-runner" \
--labels "ubuntu-latest,docker" \
--no-interactive
21.2.3 Start Runner
sudo systemctl start forgejo-runner.service
sudo systemctl status forgejo-runner.service
21.2.4 Test Runner Configuration
# Check if the runner is running
sudo systemctl status forgejo-runner.service
# Check runner logs
sudo journalctl -u forgejo-runner.service -f --no-pager
# Test runner connectivity (in a separate terminal)
forgejo-runner list
# Verify runner appears in Forgejo
# Go to your Forgejo repository → Settings → Actions → Runners
# You should see your runner listed as "ci-cd-runner" with status "Online"
Expected Output:
- systemctl status should show "active (running)"
- forgejo-runner list should show your runner
- The Forgejo web interface should show the runner as online
If something goes wrong:
- Check logs: sudo journalctl -u forgejo-runner.service -f
- Verify token: Make sure the registration token is correct
- Check network: Ensure the runner can reach your Forgejo instance
- Restart service: sudo systemctl restart forgejo-runner.service
Step 22: Set Up Monitoring and Cleanup
22.1 Monitoring Script
Important: The repository includes a pre-configured monitoring script in the scripts/
directory that can be used for both CI/CD and production monitoring.
Repository Script: scripts/monitor.sh - Comprehensive monitoring script with support for both CI/CD and production environments
To use the repository monitoring script:
# The repository is already cloned at /opt/APP_NAME/
cd /opt/APP_NAME
# Make the script executable
chmod +x scripts/monitor.sh
# Test CI/CD monitoring
./scripts/monitor.sh --type ci-cd
# Test production monitoring (if you have a production setup)
./scripts/monitor.sh --type production
Note: The repository script is more comprehensive and includes proper error handling, colored output, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate monitoring information.
22.2 Cleanup Script
Important: The repository includes a pre-configured cleanup script in the scripts/
directory that can be used for both CI/CD and production cleanup operations.
Repository Script: scripts/cleanup.sh - Comprehensive cleanup script with support for both CI/CD and production environments
To use the repository cleanup script:
# The repository is already cloned at /opt/APP_NAME/
cd /opt/APP_NAME
# Make the script executable
chmod +x scripts/cleanup.sh
# Test CI/CD cleanup (dry run first)
./scripts/cleanup.sh --type ci-cd --dry-run
# Run CI/CD cleanup
./scripts/cleanup.sh --type ci-cd
# Test production cleanup (dry run first)
./scripts/cleanup.sh --type production --dry-run
Note: The repository script is more comprehensive and includes proper error handling, colored output, dry-run mode, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate cleanup operations.
22.3 Test Cleanup Script
# Create some test images to clean up
docker pull alpine:latest
docker pull nginx:latest
docker tag alpine:latest test-cleanup:latest
docker tag nginx:latest test-cleanup2:latest
# Test cleanup with dry run first
./scripts/cleanup.sh --type ci-cd --dry-run
# Run the cleanup script
./scripts/cleanup.sh --type ci-cd
# Verify cleanup worked
echo "Checking remaining images:"
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"
echo "Checking remaining volumes:"
docker volume ls
echo "Checking remaining networks:"
docker network ls
Expected Output:
- Cleanup script should run without errors
- Test images should be removed
- System should report cleanup completion
- Remaining images should be minimal (only actively used ones)
If something goes wrong:
- Check script permissions: ls -la scripts/cleanup.sh
- Verify Docker access: docker ps
- Check registry access: cd /opt/APP_NAME/registry && docker compose ps
- Run manually: bash -x scripts/cleanup.sh
22.4 Set Up Automated Cleanup
# Create a cron job to run cleanup daily at 3 AM using the repository script
(crontab -l 2>/dev/null; echo "0 3 * * * cd /opt/APP_NAME && ./scripts/cleanup.sh --type ci-cd >> /tmp/cleanup.log 2>&1") | crontab -
# Verify the cron job was added
crontab -l
What this does:
- Runs automatically: The cleanup script runs every day at 3:00 AM
- Frequency: Daily cleanup to prevent disk space issues
- Logging: All cleanup output is logged to /tmp/cleanup.log
- What it cleans: Unused Docker images, volumes, networks, and Harbor images
Step 23: Test Complete Pipeline
23.1 Trigger a Test Build
- Make a small change to your repository (e.g., update a comment or add a test file)
- Commit and push the changes to trigger the CI/CD pipeline
- Monitor the build in your Forgejo repository → Actions tab
23.2 Verify Pipeline Steps
The pipeline should execute these steps in order:
- Checkout: Clone the repository
- Test Backend: Run backend tests
- Test Frontend: Run frontend tests
- Build Backend: Build backend Docker image
- Build Frontend: Build frontend Docker image
- Push to Registry: Push images to your private registry
- Deploy to Production: Deploy to production server
23.3 Check Harbor
# On CI/CD Linode
cd /opt/APP_NAME/registry
# Check if new images were pushed
curl -k https://localhost:8080/v2/_catalog
# Check specific repository tags
curl -k https://localhost:8080/v2/public/backend/tags/list
curl -k https://localhost:8080/v2/public/frontend/tags/list
23.4 Verify Production Deployment
# On Production Linode
cd /opt/APP_NAME
# Check if containers are running with new images
docker compose ps
# Check application health
curl http://localhost:3000
curl http://localhost:3001/health
# Check container logs for any errors
docker compose logs backend
docker compose logs frontend
23.5 Test Application Functionality
- Frontend: Visit your production URL (IP or domain)
- Backend API: Test API endpoints
- Database: Verify database connections
- Logs: Check for any errors in application logs
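A minimal smoke test you can run from the Production Linode is sketched below; the compose service name postgres and the database user sharenet are taken from the sample .env in Step 18.3, so adjust them if your configuration differs:
cd /opt/APP_NAME
curl -fsS http://localhost:3000 > /dev/null && echo "frontend OK"
curl -fsS http://localhost:3001/health && echo "backend OK"
docker compose exec -T postgres pg_isready -U sharenet && echo "database OK"
docker compose logs --tail=100 backend frontend | grep -i error || echo "no recent errors in logs"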
Step 24: Set Up SSL/TLS (Optional - Domain Users)
24.1 Install SSL Certificate
If you have a domain pointing to your Production Linode:
# On Production Linode
sudo certbot --nginx -d your-domain.com
# Verify certificate
sudo certbot certificates
24.2 Configure Auto-Renewal
# Test auto-renewal
sudo certbot renew --dry-run
# Add to crontab for automatic renewal
sudo crontab -e
# Add this line:
# 0 12 * * * /usr/bin/certbot renew --quiet
Step 25: Final Verification
25.1 Security Check
# Check firewall status
sudo ufw status
# Check fail2ban status
sudo systemctl status fail2ban
# Check SSH access (should be key-based only)
sudo grep "PasswordAuthentication" /etc/ssh/sshd_config
25.2 Performance Check
# Check system resources
htop
# Check disk usage
df -h
# Check Docker disk usage
docker system df
25.3 Backup Verification
# Test backup script
cd /opt/APP_NAME
./scripts/backup.sh --dry-run
# Run actual backup
./scripts/backup.sh
Step 26: Documentation and Maintenance
26.1 Update Documentation
- Update README.md with deployment information
- Document environment variables and their purposes
- Create troubleshooting guide for common issues
- Document backup and restore procedures
26.2 Set Up Monitoring Alerts
# Set up monitoring cron job
(crontab -l 2>/dev/null; echo "*/5 * * * * cd /opt/APP_NAME && ./scripts/monitor.sh --type production >> /tmp/monitor.log 2>&1") | crontab -
# Check monitoring logs
tail -f /tmp/monitor.log
26.3 Regular Maintenance Tasks
Daily:
- Check application logs for errors
- Monitor system resources
- Verify backup completion
Weekly:
- Review security logs
- Update system packages
- Test backup restoration
Monthly:
- Review and rotate logs
- Update SSL certificates
- Review and update documentation
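As a rough example, a weekly maintenance pass might look like the following (run as DEPLOY_USER; the fail2ban check only applies to the Production Linode):
sudo apt update && sudo apt list --upgradable                                  # review pending updates
sudo apt upgrade -y                                                            # apply them
sudo journalctl -u fail2ban --since "7 days ago" | grep -i ban | tail -n 20    # recent bans
df -h /                                                                        # disk headroom
docker system df                                                               # Docker disk usage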
🎉 Congratulations!
You have successfully set up a complete CI/CD pipeline with:
- ✅ Automated testing on every code push
- ✅ Docker image building and Harbor registry storage
- ✅ Automated deployment to production
- ✅ Health monitoring and logging
- ✅ Backup and cleanup automation
- ✅ Security hardening with proper user separation
- ✅ SSL/TLS support for production (optional)
Your application is now ready for continuous deployment with proper security, monitoring, and maintenance procedures in place!