Update Part 2 to use docker compose for deployment to prod
Some checks are pending
CI/CD Pipeline (Fully Isolated DinD) / Test Backend and Frontend (Fully Isolated DinD) (push) Waiting to run
CI/CD Pipeline (Fully Isolated DinD) / Build and Push Docker Images (DinD) (push) Blocked by required conditions
CI/CD Pipeline (Fully Isolated DinD) / Deploy to Production (push) Blocked by required conditions
parent ffb055922a
commit 1cba835799
3 changed files with 298 additions and 30 deletions

@@ -208,7 +208,16 @@ jobs:
       - name: Make scripts executable
         run: chmod +x scripts/*.sh
 
-      - name: Validate migrations before deployment
+      - name: Configure Docker for Harbor access
+        run: |
+          # Configure Docker to access Harbor registry on CI Linode
+          echo '{"insecure-registries": ["${{ secrets.CI_HOST }}:5000"]}' | sudo tee /etc/docker/daemon.json
+          sudo systemctl restart docker
+
+          # Wait for Docker to be ready
+          timeout 30 bash -c 'until docker info; do sleep 1; done'
+
+      - name: Validate migration files
         run: |
           echo "Validating migration files before deployment..."
           ./scripts/validate_migrations.sh --verbose || {
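
A quick way to confirm the daemon reconfiguration took effect on the runner (a sketch for verification, not part of this commit):

```bash
# After the restart, Docker should list the Harbor host under "Insecure Registries"
docker info | grep -A1 "Insecure Registries"
```
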
@@ -216,7 +225,20 @@ jobs:
             exit 1
           }
 
-      - name: Deploy application
+      - name: Pull and deploy application
         run: |
-          # Run deployment using the deployment script
-          ./scripts/deploy.sh deploy
+          # Pull latest images from Harbor registry
+          echo "Pulling latest images from Harbor registry..."
+          docker compose -f docker-compose.prod.yml pull
+
+          # Deploy the application stack
+          echo "Deploying application stack..."
+          docker compose -f docker-compose.prod.yml up -d
+
+          # Wait for all services to be healthy
+          echo "Waiting for all services to be healthy..."
+          timeout 120 bash -c 'until docker compose -f docker-compose.prod.yml ps | grep -q "healthy" && docker compose -f docker-compose.prod.yml ps | grep -q "Up"; do sleep 2; done'
+
+          # Verify deployment
+          echo "Verifying deployment..."
+          docker compose -f docker-compose.prod.yml ps
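
One caveat in the wait loop above: `grep -q "healthy"` also matches the string `unhealthy`, so a failing container can satisfy the check. A stricter variant might look like this (a sketch; it relies on `docker compose ps` rendering health as `(healthy)` / `(unhealthy)` in the STATUS column):

```bash
# Succeed only when at least one service reports (healthy) and none report (unhealthy)
timeout 120 bash -c '
  until docker compose -f docker-compose.prod.yml ps | grep -q "(healthy)" \
     && ! docker compose -f docker-compose.prod.yml ps | grep -q "(unhealthy)"; do
    sleep 2
  done
'
```
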
@@ -1192,8 +1192,8 @@ The DinD container is managed as an isolated environment where all CI/CD operations
 
 **How it works:**
 - **Job 1 (Testing)**: Creates PostgreSQL, Rust, and Node.js containers inside DinD
-- **Job 2 (Building)**: Uses DinD directly for building and pushing Docker images
-- **Job 3 (Deployment)**: Runs on production runner (no DinD needed)
+- **Job 2 (Building)**: Uses DinD directly for building and pushing Docker images to Harbor
+- **Job 3 (Deployment)**: Production runner pulls images from Harbor and deploys using `docker-compose.prod.yml`
 
 **Testing DinD Setup:**
 
@@ -1215,7 +1215,30 @@ docker exec ci-cd-dind docker rmi localhost:5000/test/dind-test:latest
 - Docker commands should work inside DinD
 - Harbor push/pull should work from DinD
 
-#### 8.4 Monitoring Script
+#### 8.4 Production Deployment Architecture
+
+The production deployment uses a separate Docker Compose file (`docker-compose.prod.yml`) that pulls built images from the Harbor registry and deploys the complete application stack.
+
+**Production Stack Components:**
+- **PostgreSQL**: Production database with persistent storage
+- **Backend**: Rust application built and pushed from CI/CD
+- **Frontend**: Next.js application built and pushed from CI/CD
+- **Nginx**: Reverse proxy with SSL termination
+
+**Deployment Flow:**
+1. **Production Runner**: Runs on the Production Linode with the `production` label
+2. **Image Pull**: Pulls the latest images from the Harbor registry on the CI Linode
+3. **Stack Deployment**: Uses `docker-compose.prod.yml` to deploy the complete stack
+4. **Health Verification**: Ensures all services are healthy before completion
+
+**Key Benefits:**
+- **🔄 Image Registry**: Centralized image storage in Harbor
+- **📦 Consistent Deployment**: The same images tested in CI are deployed to production
+- **⚡ Fast Deployment**: Only pulls changed images
+- **🛡️ Rollback Capability**: Can easily roll back to previous image versions
+- **📊 Health Monitoring**: Built-in health checks for all services
+
+#### 8.5 Monitoring Script
 
 **Important**: The repository includes a pre-configured monitoring script in the `scripts/` directory that can be used for both CI/CD and production monitoring.
 
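
Because the compose file (added below in this commit) resolves its image references from `${IMAGE_TAG}`, the rollback benefit is concrete: re-deploy with an earlier tag. A sketch, assuming images are also pushed under per-commit tags (the tag value here is hypothetical):

```bash
# Roll back by re-deploying a previously pushed tag
export IMAGE_TAG=abc1234   # hypothetical earlier tag, e.g. a commit SHA
docker compose -f docker-compose.prod.yml pull
docker compose -f docker-compose.prod.yml up -d
```
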
@@ -1548,16 +1571,162 @@ ssh production
 
 **Expected output**: You should be able to SSH to the production server without a password prompt.
 
-### Step 19: Test Production Setup
+### Step 19: Set Up Forgejo Runner for Production Deployment
 
-#### 19.1 Test Docker Installation
+**Important**: The Production Linode needs a Forgejo runner to execute the deployment job from the CI/CD workflow. This runner will pull images from Harbor and deploy using `docker-compose.prod.yml`.
+
+#### 19.1 Install Forgejo Runner
+
+```bash
+# Download the latest Forgejo runner
+wget -O forgejo-runner https://codeberg.org/forgejo/runner/releases/download/v4.0.0/forgejo-runner-linux-amd64
+
+# Make it executable
+chmod +x forgejo-runner
+
+# Move to system location
+sudo mv forgejo-runner /usr/bin/forgejo-runner
+
+# Verify installation
+forgejo-runner --version
+```
+
+#### 19.2 Create Runner User and Directory
+
+```bash
+# Create dedicated user for the runner
+sudo useradd -r -s /bin/bash -m -d /home/forgejo-runner forgejo-runner
+
+# Create runner directory
+sudo mkdir -p /opt/forgejo-runner
+sudo chown forgejo-runner:forgejo-runner /opt/forgejo-runner
+
+# Add runner user to docker group
+sudo usermod -aG docker forgejo-runner
+```
+#### 19.3 Get Registration Token
+
+1. Go to your Forgejo repository
+2. Navigate to **Settings → Actions → Runners**
+3. Click **"New runner"**
+4. Copy the registration token
+
+#### 19.4 Register the Production Runner
+
+```bash
+# Switch to runner user
+sudo su - forgejo-runner
+
+# Register the runner with production label
+forgejo-runner register \
+  --instance https://your-forgejo-instance \
+  --token YOUR_REGISTRATION_TOKEN \
+  --name "production-runner" \
+  --labels "production,ubuntu-latest,docker" \
+  --no-interactive
+
+# Copy configuration to system location
+sudo cp /home/forgejo-runner/.runner /opt/forgejo-runner/.runner
+sudo chown forgejo-runner:forgejo-runner /opt/forgejo-runner/.runner
+sudo chmod 600 /opt/forgejo-runner/.runner
+```
+
+**Important**: Replace `your-forgejo-instance` with your actual Forgejo instance URL and `YOUR_REGISTRATION_TOKEN` with the token you copied in Step 19.3.
+#### 19.5 Create Systemd Service
+
+```bash
+# Create systemd service file
+sudo tee /etc/systemd/system/forgejo-runner.service > /dev/null << 'EOF'
+[Unit]
+Description=Forgejo Actions Runner (Production)
+After=network.target docker.service
+
+[Service]
+Type=simple
+User=forgejo-runner
+WorkingDirectory=/opt/forgejo-runner
+ExecStart=/usr/bin/forgejo-runner daemon
+Restart=always
+RestartSec=10
+Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+
+[Install]
+WantedBy=multi-user.target
+EOF
+
+# Enable and start the service
+sudo systemctl daemon-reload
+sudo systemctl enable forgejo-runner.service
+sudo systemctl start forgejo-runner.service
+
+# Verify the service is running
+sudo systemctl status forgejo-runner.service
+```
+#### 19.6 Test Runner Configuration
+
+```bash
+# Check if the runner is running
+sudo systemctl status forgejo-runner.service
+
+# Check runner logs
+sudo journalctl -u forgejo-runner.service -f --no-pager
+
+# Verify the runner appears in Forgejo:
+# go to your Forgejo repository → Settings → Actions → Runners.
+# You should see your runner listed as "production-runner" with status "Online".
+```
+
+**Expected Output**:
+- `systemctl status` should show "active (running)"
+- The Forgejo web interface should show the runner as online with the "production" label
+
+**Important**: The CI/CD workflow (`.forgejo/workflows/ci.yml`) is already configured to use this production runner. The deploy job declares `runs-on: [self-hosted, production]`, so it executes on any runner with the "production" label. When the workflow runs, it will:
+
+1. Pull the latest Docker images from the Harbor registry
+2. Use the `docker-compose.prod.yml` file to deploy the application stack
+3. Create the necessary environment variables for production deployment
+4. Verify that all services are healthy after deployment
+
+The production runner will automatically handle the deployment process when you push to the main branch.
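
Pieced together from the workflow hunks above, the deploy job would have roughly this shape (a sketch for orientation, not the verbatim `ci.yml`; the `needs` target and checkout step are assumptions):

```yaml
deploy:
  name: Deploy to Production
  runs-on: [self-hosted, production]  # routed to the "production"-labeled runner
  needs: build                        # assumed id of the image build/push job
  steps:
    - uses: actions/checkout@v4       # assumed; docker-compose.prod.yml must be present
    - name: Pull and deploy application
      run: |
        docker compose -f docker-compose.prod.yml pull
        docker compose -f docker-compose.prod.yml up -d
```
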
+#### 19.7 Understanding the Production Docker Compose Setup
+
+The `docker-compose.prod.yml` file is specifically designed for production deployment and differs from development setups:
+
+**Key Features**:
+- **Image-based deployment**: Uses pre-built images from the Harbor registry instead of building from source
+- **Production networking**: All services communicate through a dedicated `sharenet-network`
+- **Health checks**: Each service includes health checks to ensure proper startup order
+- **Nginx reverse proxy**: Includes Nginx for SSL termination, load balancing, and security headers
+- **Persistent storage**: PostgreSQL data is stored in a named volume for persistence
+- **Environment variables**: Uses environment variables for configuration (set by the CI/CD workflow)
+
+**Service Architecture**:
+1. **PostgreSQL**: Database with health checks and persistent storage
+2. **Backend**: Rust API service that waits for PostgreSQL to be healthy
+3. **Frontend**: Next.js application that waits for the backend to be healthy
+4. **Nginx**: Reverse proxy that serves the frontend and proxies API requests to the backend
+
+**Deployment Process**:
+1. The production runner pulls the latest images from the Harbor registry
+2. Creates environment variables for the deployment
+3. Runs `docker compose -f docker-compose.prod.yml up -d`
+4. Waits for all services to be healthy
+5. Verifies the deployment was successful
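
Step 2 of the deployment process ("creates environment variables") typically amounts to writing an `.env` file next to the compose file. A sketch with placeholder values (the real values come from repository secrets; the path reuses the `/opt/APP_NAME` checkout from earlier steps):

```bash
# Hypothetical .env written by the deploy job; all values are placeholders
cat > /opt/APP_NAME/.env << 'EOF'
POSTGRES_DB=sharenet
POSTGRES_USER=sharenet
POSTGRES_PASSWORD=changeme
REGISTRY=YOUR_CI_CD_IP:8080
IMAGE_NAME=public
IMAGE_TAG=latest
EOF
```
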
+
+### Step 20: Test Production Setup
+
+#### 20.1 Test Docker Installation
 
 ```bash
 docker --version
 docker compose --version
 ```
 
-#### 19.2 Test Harbor Access
+#### 20.2 Test Harbor Access
 
 ```bash
 # Test pulling an image from the CI/CD Harbor registry
@@ -1566,14 +1735,14 @@ docker pull YOUR_CI_CD_IP:8080/public/backend:latest
 
 **Important**: Replace `YOUR_CI_CD_IP` with your actual CI/CD Linode IP address.
 
-#### 19.3 Test Application Deployment
+#### 20.3 Test Application Deployment
 
 ```bash
 cd /opt/APP_NAME
 docker compose up -d
 ```
 
-#### 19.4 Verify Application Status
+#### 20.4 Verify Application Status
 
 ```bash
 docker compose ps
@@ -1590,9 +1759,9 @@ curl http://localhost:3001/health
 
 ## Part 3: Final Configuration and Testing
 
-### Step 20: Configure Forgejo Repository Secrets
+### Step 21: Configure Forgejo Repository Secrets
 
-#### 20.1 Required Repository Secrets
+#### 21.1 Required Repository Secrets
 
 Go to your Forgejo repository and add these secrets in **Settings → Secrets and Variables → Actions**:
 
@@ -1608,16 +1777,16 @@ Go to your Forgejo repository and add these secrets in **Settings → Secrets and Variables → Actions**:
 - `DOMAIN`: Your domain name (e.g., `example.com`)
 - `EMAIL`: Your email for SSL certificate notifications
 
-#### 20.2 Configure Forgejo Actions Runner
+#### 21.2 Configure Forgejo Actions Runner
 
-##### 20.2.1 Get Runner Token
+##### 21.2.1 Get Runner Token
 
 1. Go to your Forgejo repository
 2. Navigate to **Settings → Actions → Runners**
 3. Click **"New runner"**
 4. Copy the registration token
 
-##### 20.2.2 Configure Runner
+##### 21.2.2 Configure Runner
 
 ```bash
 # Switch to DEPLOY_USER on CI/CD Linode
@@ -1636,14 +1805,14 @@ forgejo-runner register \
   --no-interactive
 ```
 
-##### 20.2.3 Start Runner
+##### 21.2.3 Start Runner
 
 ```bash
 sudo systemctl start forgejo-runner.service
 sudo systemctl status forgejo-runner.service
 ```
 
-##### 20.2.4 Test Runner Configuration
+##### 21.2.4 Test Runner Configuration
 
 ```bash
 # Check if the runner is running
@@ -1661,15 +1830,9 @@ sudo journalctl -u forgejo-runner.service -f --no-pager
 
 - `systemctl status` should show "active (running)"
 - Forgejo web interface should show the runner as online
 
-**If something goes wrong**:
-- Check logs: `sudo journalctl -u forgejo-runner.service -f`
-- Verify token: Make sure the registration token is correct
-- Check network: Ensure the runner can reach your Forgejo instance
-- Restart service: `sudo systemctl restart forgejo-runner.service`
-
-### Step 21: Set Up Monitoring and Cleanup
+### Step 22: Set Up Monitoring and Cleanup
 
-#### 21.1 Monitoring Script
+#### 22.1 Monitoring Script
 
 **Important**: The repository includes a pre-configured monitoring script in the `scripts/` directory that can be used for both CI/CD and production monitoring.
 
@@ -1693,7 +1856,7 @@ chmod +x scripts/monitor.sh
 
 **Note**: The repository script is more comprehensive and includes proper error handling, colored output, and support for both CI/CD and production environments. It automatically detects the environment and provides appropriate monitoring information.
 
-#### 21.2 DinD Cleanup Script
+#### 22.2 DinD Cleanup Script
 
 **Important**: With the DinD setup, CI/CD operations are isolated in the DinD container. This means we can use a much simpler cleanup approach - just restart the DinD container for a fresh environment.
 
@@ -1722,7 +1885,7 @@ chmod +x scripts/dind-cleanup.sh
 
 - ✅ **Fast execution**: No complex resource scanning needed
 - ✅ **Reliable**: No risk of accidentally removing Harbor resources
 
-#### 21.3 Test DinD Cleanup Script
+#### 22.3 Test DinD Cleanup Script
 
 ```bash
 # Test DinD cleanup with dry run first
@@ -1748,7 +1911,7 @@ docker exec ci-cd-dind docker run --rm alpine:latest echo "DinD cleanup successful"
 
 - Check DinD logs: `docker logs ci-cd-dind`
 - Run manually: `bash -x scripts/dind-cleanup.sh`
 
-#### 21.4 Set Up Automated DinD Cleanup
+#### 22.4 Set Up Automated DinD Cleanup
 
 ```bash
 # Create a cron job to run DinD cleanup daily at 2 AM
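
The hunk is cut off after that comment, but the schedule it names is clear; a matching crontab entry might look like this (the script path is an assumption based on the repository layout used above):

```bash
# Append a daily 02:00 cleanup job to the current user's crontab (path assumed)
(crontab -l 2>/dev/null; echo "0 2 * * * /opt/APP_NAME/scripts/dind-cleanup.sh") | crontab -
```
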
docker-compose.prod.yml — new file (+83 lines):

```yaml
version: '3.8'

services:
  postgres:
    image: postgres:15-alpine
    container_name: sharenet-postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-sharenet}
      POSTGRES_USER: ${POSTGRES_USER:-sharenet}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-changeme}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-sharenet}"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - sharenet-network

  backend:
    image: ${REGISTRY}/${IMAGE_NAME:-sharenet}/backend:${IMAGE_TAG:-latest}
    container_name: sharenet-backend
    restart: unless-stopped
    environment:
      DATABASE_URL: postgresql://${POSTGRES_USER:-sharenet}:${POSTGRES_PASSWORD:-changeme}@postgres:5432/${POSTGRES_DB:-sharenet}
      RUST_LOG: info
      RUST_BACKTRACE: 1
    ports:
      - "3001:3001"
    depends_on:
      postgres:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:3001/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - sharenet-network

  frontend:
    image: ${REGISTRY}/${IMAGE_NAME:-sharenet}/frontend:${IMAGE_TAG:-latest}
    container_name: sharenet-frontend
    restart: unless-stopped
    environment:
      NEXT_PUBLIC_API_HOST: backend
      NEXT_PUBLIC_API_PORT: 3001
      NODE_ENV: production
    ports:
      - "3000:3000"
    depends_on:
      backend:
        condition: service_healthy
    networks:
      - sharenet-network

  nginx:
    image: nginx:alpine
    container_name: sharenet-nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
    depends_on:
      - frontend
      - backend
    networks:
      - sharenet-network

volumes:
  postgres_data:
    driver: local

networks:
  sharenet-network:
    driver: bridge
```
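
Since the image references are parameterized, `docker compose config` can show what would actually be pulled before a deploy. A sketch (the variable values are placeholders; with these, the backend resolves to `YOUR_CI_CD_IP:8080/public/backend:latest`, matching the pull test in Step 20.2):

```bash
# Render the effective configuration and list the resolved image references
REGISTRY=YOUR_CI_CD_IP:8080 IMAGE_NAME=public IMAGE_TAG=latest \
  docker compose -f docker-compose.prod.yml config | grep "image:"
```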