Updated registry config.yml and nginx.conf

continuist 2025-06-28 16:55:07 -04:00
parent 71e8168821
commit 61f5ea7713


@@ -457,8 +457,8 @@ storage:
http:
addr: :5000
tls:
certificate: /opt/registry/ssl/registry.crt
key: /opt/registry/ssl/registry.key
certificate: /etc/docker/registry/ssl/registry.crt
key: /etc/docker/registry/ssl/registry.key
headers:
X-Content-Type-Options: [nosniff]
X-Frame-Options: [DENY]
@@ -550,11 +550,17 @@ services:
volumes:
- ./config.yml:/etc/docker/registry/config.yml:ro
- ./auth/auth.htpasswd:/etc/docker/registry/auth/auth.htpasswd:ro
- ./ssl:/etc/docker/registry/ssl:ro
- ./ssl/registry.crt:/etc/docker/registry/ssl/registry.crt:ro
- ./ssl/registry.key:/etc/docker/registry/ssl/registry.key:ro
- registry_data:/var/lib/registry
restart: unless-stopped
networks:
- registry_network
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5000/v2/_catalog"]
interval: 30s
timeout: 10s
retries: 3
registry-ui:
image: joxit/docker-registry-ui:latest
@@ -562,9 +568,10 @@ services:
- "80"
environment:
- REGISTRY_TITLE=APP_NAME Registry
- REGISTRY_URL=https://YOUR_CI_CD_IP:5000
- REGISTRY_URL=https://YOUR_CI_CD_IP:8080
depends_on:
- registry
registry:
condition: service_healthy
restart: unless-stopped
networks:
- registry_network
@@ -596,7 +603,7 @@ exit
**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address in the `REGISTRY_URL` environment variable.
**Note**: We added an nginx reverse proxy to handle HTTPS for the registry UI.
**Note**: We updated the volume mounts to explicitly map individual certificate files to their expected locations in the registry container.
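If you'd rather script the substitution than edit the file by hand, here is a minimal sketch, assuming the compose file lives at `/opt/registry/docker-compose.yml` (`203.0.113.10` stands in for your real IP):

```bash
# Replace the placeholder in place; 203.0.113.10 is only an example address
sed -i 's/YOUR_CI_CD_IP/203.0.113.10/g' /opt/registry/docker-compose.yml

# Confirm the REGISTRY_URL line now carries the real IP
grep REGISTRY_URL /opt/registry/docker-compose.yml
```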
#### 4.4.1 Create Nginx Configuration
@@ -614,6 +621,10 @@ http {
server registry-ui:80;
}
upstream registry_api {
server registry:5000;
}
server {
listen 443 ssl;
server_name YOUR_CI_CD_IP;
@@ -623,12 +634,28 @@ http {
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
# Proxy registry API requests
location /v2/ {
proxy_pass http://registry_api;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
}
# Proxy registry UI requests
location / {
proxy_pass http://registry_ui;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
}
}
}
@@ -660,6 +687,155 @@ docker compose up -d
exit
```
#### 4.6.1 Restart Registry with Updated Configuration
If you've already started the registry and then updated the `REGISTRY_URL` in the docker-compose.yml file, you need to restart the containers for the changes to take effect:
```bash
# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER
cd /opt/registry
# Stop and remove the existing containers
docker compose down
# Start the containers with the updated configuration
docker compose up -d
# Exit SERVICE_USER shell
exit
```
**Note**: This step is only needed if you've already started the registry and then updated the `REGISTRY_URL`. If you're starting fresh, Step 4.6 is sufficient.
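One way to confirm whether a restart is actually needed is to compare the value in the compose file with what the running UI container was started with. A sketch, run from `/opt/registry`, assuming the compose service is named `registry-ui` as in the compose file above:

```bash
# Value the compose file will apply on the next (re)start
docker compose config | grep REGISTRY_URL

# Value the currently running container was started with
docker compose exec registry-ui env | grep REGISTRY_URL
```

If the two values differ, run the restart sequence above.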
#### 4.6.2 Troubleshoot Connection Issues
If you get "Unable to Connect" when accessing `https://YOUR_CI_CD_IP:8080`, run these diagnostic commands:
```bash
# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER
cd /opt/registry
# Check if all containers are running
docker compose ps
# Check container logs for errors
docker compose logs nginx
docker compose logs registry-ui
docker compose logs registry
# Check if nginx is listening on port 8080
netstat -tlnp | grep :8080
# Test nginx directly
curl -k https://localhost:8080
# Exit SERVICE_USER shell
exit
```
**Common Issues and Solutions** (diagnostic commands for several of these are sketched after the list):
- **Container not running**: Run `docker compose up -d` to start containers
- **Port conflict**: Check if port 8080 is already in use
- **SSL certificate issues**: Verify the certificate files exist and have correct permissions
- **Firewall blocking**: Ensure port 8080 is open in your firewall
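A few follow-up checks for the issues above, sketched under the assumption that the registry lives in `/opt/registry` and that UFW is the firewall in use:

```bash
# Port conflict: see whether something else already owns port 8080
sudo ss -tlnp | grep ':8080' || echo "port 8080 is free"

# SSL certificate issues: confirm both files exist and are readable
ls -l /opt/registry/ssl/registry.crt /opt/registry/ssl/registry.key

# Firewall blocking: check that 8080 is allowed (UFW shown as an example)
sudo ufw status | grep 8080
```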
#### 4.6.3 Fix Container Restart Issues
If containers are restarting repeatedly, check the logs and fix the configuration:
```bash
# Switch to SERVICE_USER (registry directory owner)
sudo su - SERVICE_USER
cd /opt/registry
# Stop all containers
docker compose down
# Check if SSL certificates exist
ls -la ssl/
# If certificates don't exist, generate them
if [ ! -f ssl/registry.crt ]; then
echo "Generating SSL certificates..."
mkdir -p ssl
openssl req -x509 -newkey rsa:4096 -keyout ssl/registry.key -out ssl/registry.crt -days 365 -nodes -subj "/C=US/ST=State/L=City/O=Organization/CN=YOUR_CI_CD_IP"
chmod 600 ssl/registry.key
chmod 644 ssl/registry.crt
fi
# Check if nginx.conf exists
ls -la nginx.conf
# If nginx.conf doesn't exist, create it
if [ ! -f nginx.conf ]; then
echo "Creating nginx configuration..."
cat > nginx.conf << 'EOF'
events {
worker_connections 1024;
}
http {
upstream registry_ui {
server registry-ui:80;
}
upstream registry_api {
server registry:5000;
}
server {
listen 443 ssl;
server_name YOUR_CI_CD_IP;
ssl_certificate /etc/nginx/ssl/registry.crt;
ssl_certificate_key /etc/nginx/ssl/registry.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
# Proxy registry API requests
location /v2/ {
proxy_pass http://registry_api;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Proxy registry UI requests
location / {
proxy_pass http://registry_ui;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
}
EOF
fi
# Replace YOUR_CI_CD_IP with actual IP in nginx.conf
sed -i "s/YOUR_CI_CD_IP/YOUR_ACTUAL_CI_CD_IP/g" nginx.conf
# Start containers and check logs
docker compose up -d
# Wait a moment, then check logs
sleep 5
docker compose logs nginx
docker compose logs registry
# Exit SERVICE_USER shell
exit
```
**Important**: Replace `YOUR_ACTUAL_CI_CD_IP` with your actual CI/CD Linode IP address in the command above.
#### 4.7 Test Registry Setup
```bash
@@ -670,10 +846,10 @@ sudo su - SERVICE_USER
cd /opt/registry
docker compose ps
# Test registry API (HTTPS)
curl -k https://localhost:5000/v2/_catalog
# Test registry API (HTTPS via nginx)
curl -k https://localhost:8080/v2/_catalog
# Test registry UI (HTTPS)
# Test registry UI (HTTPS via nginx)
curl -I https://localhost:8080
# Test Docker push/pull (optional but recommended)
@@ -682,28 +858,28 @@ echo "FROM alpine:latest" > /tmp/test.Dockerfile
echo "RUN echo 'Hello from test image'" >> /tmp/test.Dockerfile
# Build and tag test image
docker build -f /tmp/test.Dockerfile -t localhost:5000/test:latest /tmp
docker build -f /tmp/test.Dockerfile -t localhost:8080/test:latest /tmp
# Push to registry
docker push localhost:5000/test:latest
docker push localhost:8080/test:latest
# Verify image is in registry
curl -k https://localhost:5000/v2/_catalog
curl -k https://localhost:5000/v2/test/tags/list
curl -k https://localhost:8080/v2/_catalog
curl -k https://localhost:8080/v2/test/tags/list
# Pull image back (verifies pull works)
docker rmi localhost:5000/test:latest
docker pull localhost:5000/test:latest
docker rmi localhost:8080/test:latest
docker pull localhost:8080/test:latest
# Clean up test image completely
# Remove from local Docker
docker rmi localhost:5000/test:latest
docker rmi localhost:8080/test:latest
# Clean up test file
rm /tmp/test.Dockerfile
# Clean up test repository using registry UI
# 1. Open your browser and go to: http://YOUR_CI_CD_IP:8080
# 1. Open your browser and go to: https://YOUR_CI_CD_IP:8080
# 2. You should see the 'test' repository listed
# 3. Click on the 'test' repository
# 4. Click the delete button (trash icon) next to the 'latest' tag
@@ -711,7 +887,7 @@ rm /tmp/test.Dockerfile
# 6. The test repository should now be removed
# Verify registry is empty
curl -k https://localhost:5000/v2/_catalog
curl -k https://localhost:8080/v2/_catalog
# Exit SERVICE_USER shell
exit
@@ -755,7 +931,7 @@ sudo tee /etc/docker/daemon.json << EOF
"insecure-registries": [],
"registry-mirrors": [],
"auths": {
"YOUR_CI_CD_IP:5000": {
"YOUR_CI_CD_IP:8080": {
"auth": "$(echo -n "${PUSH_USER}:${PUSH_PASSWORD}" | base64)"
}
}
@ -763,6 +939,8 @@ sudo tee /etc/docker/daemon.json << EOF
EOF
```
**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address.
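Before restarting Docker in the next step, it can help to confirm that the generated file is valid JSON. A minimal check, assuming `python3` is available on the host:

```bash
# Parse the file; any syntax error (for example a stray comma) is reported immediately
sudo python3 -m json.tool /etc/docker/daemon.json
```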
#### 5.2 Restart Docker
```bash
@@ -1351,161 +1529,4 @@ RUST_BACKTRACE=1
EOF
```
**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address.
**Default Environment Variables** (from `docker-compose.yml`):
- `POSTGRES_DB=sharenet` - PostgreSQL database name
- `POSTGRES_USER=sharenet` - PostgreSQL username
- `POSTGRES_PASSWORD=changeme` - PostgreSQL password (should be changed)
- `REGISTRY=your-username/sharenet` - Docker registry path (used as fallback)
- `IMAGE_NAME=your-username/sharenet` - Docker image name (used as fallback)
- `IMAGE_TAG=latest` - Docker image tag (used as fallback)
**Note**: The database user and database name can be controlled via the `POSTGRES_USER` and `POSTGRES_DB` secrets in your Forgejo repository settings. If you set these secrets, they will override the default values used in this environment file.
**Security Note**: Always change the default `POSTGRES_PASSWORD` from `changeme` to a strong, unique password in production.
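One simple way to produce a strong replacement for the default `changeme` value, assuming `openssl` is installed:

```bash
# Generate a random 32-byte, base64-encoded string to use as POSTGRES_PASSWORD
openssl rand -base64 32
```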
#### 18.4 Verify Repository Contents
```bash
# Check that the docker-compose.yml file is present
ls -la docker-compose.yml
# Check that the nginx configuration is present
ls -la nginx/nginx.conf
# Check that the CI/CD workflow is present
ls -la .forgejo/workflows/ci.yml
# Check the repository structure
ls -la
# Verify the docker-compose.yml content
head -20 docker-compose.yml
# Verify the nginx configuration content
head -20 nginx/nginx.conf
# Verify the CI/CD workflow content
head -20 .forgejo/workflows/ci.yml
```
**Expected output**: You should see the `docker-compose.yml` file, `nginx/nginx.conf` file, `.forgejo/workflows/ci.yml` file, and other project files from your repository.
#### 18.5 Deployment Scripts
**Important**: The repository includes pre-configured deployment scripts in the `scripts/` directory that are used by the CI/CD pipeline. These scripts handle safe production deployments with database migrations, backups, and rollback capabilities.
**Repository Scripts** (used by CI/CD pipeline):
- `scripts/deploy.sh` - Main deployment script with migration support
- `scripts/deploy-local.sh` - Local development deployment script
- `scripts/migrate.sh` - Database migration management
- `scripts/validate_migrations.sh` - Migration validation
- `scripts/monitor.sh` - Comprehensive monitoring script for both CI/CD and production environments
- `scripts/cleanup.sh` - Comprehensive cleanup script for both CI/CD and production environments
- `scripts/backup.sh` - Comprehensive backup script for both CI/CD and production environments
**To use the repository deployment scripts**:
```bash
# The scripts are already available in the cloned repository
cd /opt/APP_NAME
# Make the scripts executable
chmod +x scripts/deploy.sh scripts/deploy-local.sh
# Test local deployment
./scripts/deploy-local.sh status
# Run local deployment
./scripts/deploy-local.sh deploy
# Test production deployment (dry run)
./scripts/deploy.sh check
# Run production deployment
./scripts/deploy.sh deploy
```
**Alternative: Create a local copy for convenience**:
```bash
# Copy the local deployment script to the application directory for easy access
cp scripts/deploy-local.sh /opt/APP_NAME/deploy-local.sh
chmod +x /opt/APP_NAME/deploy-local.sh
# Test the local copy
cd /opt/APP_NAME
./deploy-local.sh status
```
**Note**: The repository scripts are more comprehensive and include proper error handling, colored output, and multiple commands. The `scripts/deploy.sh` is used by the CI/CD pipeline and includes database migration handling, backup creation, and rollback capabilities. The `scripts/deploy-local.sh` is designed for local development deployments and includes status checking, restart, and log viewing capabilities.
#### 18.6 Backup Script
**Important**: The repository includes a pre-configured backup script in the `scripts/` directory that can be used for both CI/CD and production backup operations.
**Repository Script**:
- `scripts/backup.sh` - Comprehensive backup script with support for both CI/CD and production environments
**To use the repository backup script**:
```bash
# The script is already available in the cloned repository
cd /opt/APP_NAME
# Make the script executable
chmod +x scripts/backup.sh
# Test production backup (dry run first)
./scripts/backup.sh --type production --app-name APP_NAME --dry-run
# Run production backup
./scripts/backup.sh --type production --app-name APP_NAME
# Test CI/CD backup (dry run first)
./scripts/backup.sh --type ci-cd --app-name APP_NAME --dry-run
```
**Alternative: Create a local copy for convenience**:
```bash
# Copy the script to the application directory for easy access
cp scripts/backup.sh /opt/APP_NAME/backup-local.sh
chmod +x /opt/APP_NAME/backup-local.sh
# Test the local copy (dry run)
cd /opt/APP_NAME
./backup-local.sh --type production --app-name APP_NAME --dry-run
```
**Note**: The repository script is more comprehensive and includes proper error handling, colored output, dry-run mode, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate backup operations.
#### 18.6.1 Set Up Automated Backup Scheduling
```bash
# Create a cron job to run backups daily at 2 AM using the repository script
(crontab -l 2>/dev/null; echo "0 2 * * * cd /opt/APP_NAME && ./scripts/backup.sh --type production --app-name APP_NAME >> /opt/APP_NAME/backup.log 2>&1") | crontab -
# Verify the cron job was added
crontab -l
```
**What this does:**
- **Runs automatically**: The backup script runs every day at 2:00 AM
- **Frequency**: Daily backups to ensure minimal data loss (a weekly variant is sketched after this list)
- **Logging**: All backup output is logged to `/opt/APP_NAME/backup.log`
- **Retention**: The script automatically keeps only the last 7 days of backups (configurable)
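The schedule is ordinary cron syntax, so it is easy to adjust. For example, a weekly run (Sundays at 03:00) would look like this, reusing the same script and log path:

```bash
# Weekly backup instead of daily: Sundays at 03:00
(crontab -l 2>/dev/null; echo "0 3 * * 0 cd /opt/APP_NAME && ./scripts/backup.sh --type production --app-name APP_NAME >> /opt/APP_NAME/backup.log 2>&1") | crontab -
```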
**To test the backup manually:**
```bash
cd /opt/APP_NAME
./scripts/backup.sh --type production --app-name APP_NAME
```
**To view backup logs:**
```bash
tail -f /opt/APP_NAME/backup.log
```
**Alternative: Use a local copy for automated backup**:
```bash
# If you created a local copy, use that instead
(crontab -l 2>/dev/null; echo "0 2 * * * cd /opt/APP_NAME && ./backup-local.sh --type production --app-name APP_NAME >> /opt/APP_NAME/backup.log 2>&1") | crontab -
```
**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address.