Update prod configuration fail2ban and ufw steps

This commit is contained in:
continuist 2025-09-06 12:08:55 -04:00
parent b0234f13b5
commit 1ea4dc32e5


@@ -1496,7 +1496,99 @@ sudo apt install -y \
python3-certbot-nginx
```
#### 9.5 Configure Firewall and Fail2ban FIRST - Before Any Services
**SECURITY FIRST**: Configure firewall and intrusion prevention BEFORE any services are installed or exposed to prevent attackers from exploiting open ports.
```bash
# Configure secure firewall defaults
sudo ufw --force enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
```
**Security Note**: We only allow ports 80 and 443 for external access. The application services (backend on 3001, frontend on 3000) are only accessible through the Nginx reverse proxy, which provides better security and SSL termination.
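With only ports 22, 80, and 443 allowed, it is worth cross-checking what is actually listening. A quick sketch using `ss` (from iproute2), which runs without root, though process names require sudo:

```shell
# Listening TCP sockets; with the rules above, externally reachable
# ports should be limited to 22 (ssh), 80, and 443.
ss -tln
```

`sudo ufw status verbose` gives the complementary view: the rules as the firewall sees them.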
**Fail2ban Configuration**:
**Fail2ban** is an intrusion prevention system that monitors logs and automatically blocks IP addresses showing malicious behavior.
```bash
# Install fail2ban (if not already installed)
sudo apt install -y fail2ban
# Create a custom jail configuration
sudo tee /etc/fail2ban/jail.local > /dev/null << 'EOF'
[DEFAULT]
# Ban time in seconds (24 hours)
bantime = 86400
# Find time in seconds (10 minutes)
findtime = 600
# Max retries before ban
maxretry = 3
# Ban action (use ufw since we're using ufw firewall)
banaction = ufw
# Log level
loglevel = INFO
# Log target
logtarget = /var/log/fail2ban.log
# SSH protection
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
# Note: Nginx protection is handled by the firewall and application-level security
# Docker containers are isolated, and Nginx logs are not directly accessible to fail2ban
# Web attack protection is provided by:
# 1. UFW firewall (ports 80/443 only)
# 2. Nginx security headers and rate limiting
# 3. Application-level input validation
EOF
# Enable and start fail2ban
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
# Verify fail2ban is running
sudo systemctl status fail2ban
# Check current jails
sudo fail2ban-client status
```
**What this does**:
- **SSH Protection**: Blocks IPs that fail SSH login 3 times in 10 minutes
- **24-hour bans**: Banned IPs are blocked for 24 hours
- **Automatic monitoring**: Continuously watches SSH logs
**Web Security Note**: Since Nginx runs in a Docker container, web attack protection is handled by:
- **UFW Firewall**: Only allows ports 80/443 (no direct access to app services)
- **Nginx Security**: Built-in rate limiting and security headers
- **Application Security**: Input validation in the backend/frontend code
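As a concrete instance of the Nginx-side layer, a minimal rate-limit sketch using the standard `ngx_http_limit_req_module`; the zone name, rate, and upstream address are illustrative assumptions, not values from this guide:

```nginx
# Illustrative only - tune rate and burst to your traffic
limit_req_zone $binary_remote_addr zone=app_ratelimit:10m rate=10r/s;

server {
    listen 80;
    location / {
        # Queue up to 20 bursty requests, reject the rest (503 by default)
        limit_req zone=app_ratelimit burst=20 nodelay;
        proxy_pass http://127.0.0.1:3000;
    }
}
```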
**Monitoring Fail2ban**:
```bash
# Check banned IPs
sudo fail2ban-client status sshd
# Unban an IP if needed
sudo fail2ban-client set sshd unbanip IP_ADDRESS
# View fail2ban logs
sudo tail -f /var/log/fail2ban.log
# Check all active jails
sudo fail2ban-client status
```
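To get a feel for what the `sshd` filter is matching, you can grep the same log fail2ban watches. Shown here against a scratch file with fabricated but representative lines (the real path is `/var/log/auth.log`):

```shell
# Representative auth.log lines (fabricated for illustration)
log=$(mktemp)
cat > "$log" << 'EOF'
Sep  6 12:00:01 host sshd[101]: Failed password for invalid user admin from 203.0.113.7 port 4242 ssh2
Sep  6 12:00:09 host sshd[102]: Failed password for root from 203.0.113.7 port 4243 ssh2
Sep  6 12:01:30 host sshd[103]: Accepted publickey for deploy from 198.51.100.2 port 5151 ssh2
EOF
# Count failed attempts per source IP - the signal fail2ban acts on
grep 'Failed password' "$log" | awk '{print $(NF-3)}' | sort | uniq -c
rm -f "$log"
```

Three such failures from one IP inside the 600-second `findtime` is what triggers the 24-hour ban configured above.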
#### 9.6 Secure SSH Configuration
**Critical Security Step**: After setting up SSH key authentication, you must disable password authentication and root login to secure your Production server.
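The two critical directives are `PasswordAuthentication no` and `PermitRootLogin no`. A sketch of the edit, practiced here on a scratch copy; apply the same `sed` lines to `/etc/ssh/sshd_config` with `sudo`, then validate with `sudo sshd -t` before restarting the service:

```shell
# Practice on a scratch copy before touching /etc/ssh/sshd_config
cfg=$(mktemp)
printf '#PasswordAuthentication yes\n#PermitRootLogin prohibit-password\n' > "$cfg"
# Uncomment-and-set both directives in one pass
sed -i -e 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' \
       -e 's/^#\?PermitRootLogin.*/PermitRootLogin no/' "$cfg"
cat "$cfg"
rm -f "$cfg"
```

Keep an existing SSH session open while restarting sshd, so a mistake cannot lock you out.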
@@ -1682,13 +1774,53 @@ sudo apt install -y podman
podman --version
```
#### 11.2 Configure Podman for Production Service Account Only
**Security Restriction**: Configure Podman to only be usable by PROD_SERVICE_USER to prevent unauthorized container operations.
```bash
# Create podman group for controlled access
sudo groupadd podman
# Add PROD_SERVICE_USER to podman group
sudo usermod -aG podman PROD_SERVICE_USER
# Configure Podman socket permissions (restrict to podman group)
sudo mkdir -p /etc/containers
sudo tee /etc/containers/containers.conf > /dev/null << 'EOF'
[engine]
events_logger = "file"
[network]
dns_bind_port = 53
[engine.runtimes]
runc = [
"/usr/bin/runc",
]
[engine.socket_group]
group = "podman"
EOF
# Configure user namespace for rootless operation (if not already done)
echo 'kernel.unprivileged_userns_clone=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# Verify PROD_SERVICE_USER can access Podman
sudo -u PROD_SERVICE_USER podman --version
# Test that other users cannot access Podman socket
sudo -u PROD_DEPLOY_USER podman --version || echo "Good: PROD_DEPLOY_USER cannot access Podman"
```
**What this does**:
- Creates dedicated `podman` group for controlled access
- Restricts Podman socket access to only members of the `podman` group
- Ensures only PROD_SERVICE_USER can execute Podman commands
- Prevents PROD_DEPLOY_USER and other users from running containers
- Maintains rootless operation for security
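Group changes only take effect in new login sessions, so membership is worth verifying after PROD_SERVICE_USER logs in again. A sketch using standard tools, written to print a result either way rather than fail hard:

```shell
# Does the podman group exist yet?
( getent group podman > /dev/null && echo "podman group: present" ) \
  || echo "podman group: missing"
# Is the current session's user in it? (log out/in after usermod -aG)
( id -nG | grep -qw podman && echo "current user: in podman group" ) \
  || echo "current user: not in podman group (or session predates usermod)"
```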
#### 11.4 Create Application Directory
```bash
@@ -1891,116 +2023,22 @@ The `prod-pod.yaml` file is specifically designed for production deployment and
4. Waits for all services to be healthy
5. Verifies the deployment was successful
### Step 13: Test Production Setup
#### 13.1 Test Podman Installation
```bash
# Test Podman installation (run as PROD_SERVICE_USER)
sudo -u PROD_SERVICE_USER podman --version
```
#### 13.2 Test Forgejo Container Registry Access
```bash
# Test pulling an image from Forgejo Container Registry (run as PROD_SERVICE_USER)
sudo -u PROD_SERVICE_USER podman pull YOUR_CI_CD_IP/APP_NAME/test:latest
```
**Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address.
@@ -2013,7 +2051,7 @@ podman pull YOUR_CI_CD_IP/APP_NAME/test:latest
## Part 3: Final Configuration and Testing
### Step 14: Configure Forgejo Repository Secrets
Go to your Forgejo repository and add these secrets in **Settings → Secrets and Variables → Actions**:
@@ -2106,15 +2144,15 @@ Go to your Forgejo repository and add these secrets in **Settings → Secrets and Variables → Actions**:
**Note**: This setup uses custom Dockerfiles for testing environments with base images. The CI pipeline automatically checks if base images exist in Forgejo Container Registry and pulls them from Docker Hub only when needed, eliminating rate limiting issues and providing better control over the testing environment.
### Step 15: Test Complete Pipeline
#### 15.1 Trigger a Test Build
1. **Make a small change** to your repository (e.g., update a comment or add a test file)
2. **Commit and push** the changes to trigger the CI/CD pipeline
3. **Monitor the build** in your Forgejo repository → Actions tab
#### 15.2 Verify Pipeline Steps
The pipeline should execute these steps in order:
@@ -2127,7 +2165,7 @@ The pipeline should execute these steps in order:
7. **Push to Registry**: Push images to Forgejo Container Registry from DinD
8. **Deploy to Production**: Deploy to production server
#### 15.3 Check Forgejo Container Registry
```bash
# On CI/CD Linode
@@ -2148,7 +2186,7 @@ curl -k https://YOUR_CI_CD_IP:4443/v2/_catalog
# Expected: This should return authentication error without credentials
```
#### 15.4 Verify Production Deployment
```bash
# On Production Linode
@@ -2166,16 +2204,16 @@ podman logs sharenet-production-pod-backend
podman logs sharenet-production-pod-frontend
```
#### 15.5 Test Application Functionality
1. **Frontend**: Visit your production URL (IP address)
2. **Backend API**: Test API endpoints
3. **Database**: Verify database connections
4. **Logs**: Check for any errors in application logs
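The frontend and API checks reduce to asserting HTTP status codes. The pattern, demonstrated against a throwaway local server so it can be run anywhere; substitute your production IP or domain, and note that the port and endpoint here are illustrative stand-ins:

```shell
# Throwaway local server standing in for the production URL
python3 -m http.server 8099 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
# Status-code-only check; a healthy frontend root should return 200
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8099/
kill "$srv"
```

The same one-liner against `http://YOUR_PRODUCTION_IP/` and your API routes makes a quick smoke test you can script.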
### Step 16: Final Verification
#### 16.1 Security Check
```bash
# Check firewall status
@@ -2188,7 +2226,7 @@ sudo systemctl status fail2ban
sudo grep "PasswordAuthentication" /etc/ssh/sshd_config
```
#### 16.2 Performance Check
```bash
# Check system resources
@@ -2201,7 +2239,7 @@ df -h
docker system df
```
#### 16.3 Backup Verification
```bash
# Test backup script
@@ -2212,16 +2250,16 @@ cd /opt/APP_NAME
./scripts/backup.sh
```
### Step 17: Documentation and Maintenance
#### 17.1 Update Documentation
1. **Update README.md** with deployment information
2. **Document environment variables** and their purposes
3. **Create troubleshooting guide** for common issues
4. **Document backup and restore procedures**
#### 17.2 Set Up Monitoring Alerts
```bash
# Set up monitoring cron job
@@ -2231,7 +2269,7 @@ cd /opt/APP_NAME
tail -f /tmp/monitor.log
```
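The cron entry feeding that log might look like the following; the five-minute cadence and the script path are assumptions for illustration, not values from this guide:

```
*/5 * * * * /opt/APP_NAME/scripts/monitor.sh >> /tmp/monitor.log 2>&1
```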
#### 17.3 Regular Maintenance Tasks
**Daily:**
- Check application logs for errors
@@ -2295,11 +2333,11 @@ The CI pipeline has been updated to:
Since the repository is public, applications can pull images anonymously:
```bash
# Pull by tag (run as PROD_SERVICE_USER in production)
sudo -u PROD_SERVICE_USER podman pull forgejo.example.com/owner/repo/backend:latest
# Pull by digest (run as PROD_SERVICE_USER in production)
sudo -u PROD_SERVICE_USER podman pull forgejo.example.com/owner/repo/backend@sha256:abc123...
```
#### Image Naming Convention
@@ -2344,12 +2382,12 @@ For example:
# Test registry access
podman login REGISTRY_HOST -u REGISTRY_USERNAME -p REGISTRY_TOKEN
# List available images (run as PROD_SERVICE_USER in production)
sudo -u PROD_SERVICE_USER podman search REGISTRY_HOST/OWNER_REPO
# Pull and verify image (run as PROD_SERVICE_USER in production)
sudo -u PROD_SERVICE_USER podman pull REGISTRY_HOST/OWNER_REPO/backend:TAG
sudo -u PROD_SERVICE_USER podman image inspect REGISTRY_HOST/OWNER_REPO/backend:TAG
```
---