feat: Implement SFTP offsite backup functionality (v1.3.75)
- Add SFTP upload support with paramiko
- Add database columns for offsite tracking (status, location, attempts, error)
- Add manual upload endpoint /api/v1/backups/offsite/{job_id}
- Add frontend button for offsite upload
- Add SFTP configuration in config.py
- Fix infinite loop in _ensure_remote_directory for relative paths
- Add upload verification and retry mechanism
- Add progress tracking and logging
parent 1b84bee868
commit 6c4042b9b6

RELEASE_NOTES_v1.3.75.md (new file, 161 lines)

@@ -0,0 +1,161 @@
# Release Notes - v1.3.75

**Release Date:** January 2, 2026

## ✨ New Features

### SFTP Offsite Backup

- **Implemented SFTP offsite backup** - Backups can now be uploaded to a remote SFTP server
- **Auto-upload support** - Backups can be uploaded automatically after creation
- **Manual upload** - Backups can be uploaded manually via the web UI
- **Upload verification** - File size verification confirms a successful upload
- **Retry mechanism** - Failed uploads can be retried, with error tracking
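The verify-and-retry flow described above can be sketched as a small helper. The function name, parameters, and size-based check below are illustrative (hypothetical names, not the project's actual service code):

```python
import time

def upload_with_retry(upload_fn, local_size, remote_size_fn, max_attempts=3, delay=2.0):
    """Attempt an upload, verify the remote file size, and retry on failure.

    upload_fn: callable performing the transfer (e.g. a paramiko sftp.put wrapper)
    remote_size_fn: callable returning the remote file size in bytes
    Returns (success, attempts, last_error).
    """
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            upload_fn()
            if remote_size_fn() == local_size:
                return True, attempt, None  # size matches: upload verified
            last_error = "size mismatch after upload"
        except Exception as exc:
            last_error = str(exc)
        if attempt < max_attempts:
            time.sleep(delay)  # brief pause before retrying
    return False, max_attempts, last_error
```

The attempt count and last error map naturally onto the `offsite_attempts` and `offsite_last_error` columns added in this release.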

### Database Schema Updates

- Added `offsite_status` column (pending, uploading, uploaded, failed)
- Added `offsite_location` column for the remote file path
- Added `offsite_attempts` counter for retry tracking
- Added `offsite_last_error` column for error logging

## 🔧 Technical Improvements

### SFTP Implementation

- Uses the `paramiko` library for SFTP connections
- Supports password authentication
- Automatic directory creation on the remote server
- Progress tracking during upload
- Connection timeout protection (30s banner timeout)
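Progress tracking can hook into paramiko's `SFTPClient.put(..., callback=...)`, which invokes the callback with `(bytes_transferred, total_bytes)`. A throttled logging callback might look like this (a sketch; `make_progress_callback` is a hypothetical helper, not part of the codebase):

```python
def make_progress_callback(log_fn, step_pct=10):
    """Return a callback suitable for paramiko's SFTPClient.put(callback=...).

    paramiko calls it as callback(bytes_transferred, total_bytes); this wrapper
    emits at most one log line per `step_pct` percent of progress.
    """
    last_reported = {"pct": -step_pct}

    def callback(transferred, total):
        if total <= 0:
            return
        pct = int(transferred * 100 / total)
        if pct - last_reported["pct"] >= step_pct:
            last_reported["pct"] = pct
            log_fn(f"upload progress: {pct}% ({transferred}/{total} bytes)")

    return callback
```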

### Configuration

- `OFFSITE_ENABLED` - Enable/disable offsite uploads
- `SFTP_HOST` - Remote SFTP server hostname
- `SFTP_PORT` - SFTP port (default: 22)
- `SFTP_USER` - SFTP username
- `SFTP_PASSWORD` - SFTP password
- `SFTP_REMOTE_PATH` - Remote directory path
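For illustration, the settings above could be read from the environment roughly like this. The project itself uses a pydantic `Settings` class in `app/core/config.py`, so this plain-dict loader is only a sketch:

```python
import os

def load_sftp_config(env=os.environ):
    """Read the SFTP settings listed above from environment variables.

    A simplified stand-in for the project's pydantic Settings class.
    """
    return {
        "enabled": env.get("OFFSITE_ENABLED", "false").lower() == "true",
        "host": env.get("SFTP_HOST", ""),
        "port": int(env.get("SFTP_PORT", "22")),  # default SFTP port
        "user": env.get("SFTP_USER", ""),
        "password": env.get("SFTP_PASSWORD", ""),
        "remote_path": env.get("SFTP_REMOTE_PATH", ""),
    }
```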

### Bug Fixes

- Fixed an infinite loop in `_ensure_remote_directory()` for relative paths
- Removed a duplicate `upload_to_offsite()` method
- Fixed a router method name mismatch (`upload_offsite` vs `upload_to_offsite`)
- Added protection against empty/root path directory creation
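The root cause of the infinite loop is that walking upward with `os.path.dirname` from a relative path never reaches `'/'`: `os.path.dirname('a')` is `''`, and `os.path.dirname('')` is `''` again. A minimal illustration of a terminating walk (the shipped fix instead recurses on the parent directory):

```python
import os

def ancestor_dirs(path):
    """Collect a path and its ancestors root-first, terminating for both
    absolute and relative paths.

    The buggy loop was `while current != '/':`, which spins forever on
    relative paths because dirname eventually yields '' and stays there.
    """
    dirs = []
    current = path
    while current not in ('', '/', '.'):  # fixed: also stop at '' and '.'
        dirs.append(current)
        current = os.path.dirname(current)
    dirs.reverse()
    return dirs
```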

## 📝 Files Changed

- `app/backups/backend/service.py` - SFTP upload implementation
- `app/backups/backend/router.py` - Offsite upload endpoint
- `app/backups/templates/index.html` - Frontend offsite upload button
- `app/core/config.py` - SFTP configuration settings
- `migrations/052_backup_offsite_columns.sql` - Database schema migration
- `.env` - SFTP configuration

## 🚀 Deployment Instructions

### Prerequisites

- Ensure the `.env` file contains SFTP credentials
- The database migration must be applied

### Production Server Update

1. **SSH to the server:**

   ```bash
   ssh bmcadmin@172.16.31.183
   ```

2. **Navigate to the project directory:**

   ```bash
   cd /opt/bmc_hub  # Or the correct path
   ```

3. **Pull the new version:**

   ```bash
   git fetch --tags
   git checkout v1.3.75
   ```

4. **Update the .env file with SFTP credentials:**

   ```bash
   nano .env
   # Add:
   # OFFSITE_ENABLED=true
   # SFTP_HOST=sftp.acdu.dk
   # SFTP_PORT=9022
   # SFTP_USER=sftp_bmccrm
   # SFTP_PASSWORD=<password>
   # SFTP_REMOTE_PATH=SFTP_BMCCRM
   ```

5. **Run the database migration:**

   ```bash
   docker-compose exec postgres psql -U bmcnetworks -d bmc_hub -f /migrations/052_backup_offsite_columns.sql
   # OR run the ALTER TABLE statements manually:
   docker-compose exec postgres psql -U bmcnetworks -d bmc_hub -c "
   ALTER TABLE backup_jobs ADD COLUMN IF NOT EXISTS offsite_status VARCHAR(20) CHECK (offsite_status IN ('pending','uploading','uploaded','failed'));
   ALTER TABLE backup_jobs ADD COLUMN IF NOT EXISTS offsite_location VARCHAR(500);
   ALTER TABLE backup_jobs ADD COLUMN IF NOT EXISTS offsite_attempts INTEGER DEFAULT 0;
   ALTER TABLE backup_jobs ADD COLUMN IF NOT EXISTS offsite_last_error TEXT;
   "
   ```

6. **Restart the containers:**

   ```bash
   docker-compose down
   docker-compose up -d --build
   ```

7. **Verify:**

   ```bash
   docker-compose logs -f api | grep -i offsite
   curl http://localhost:8001/health
   # Test offsite upload:
   curl -X POST http://localhost:8001/api/v1/backups/offsite/{job_id}
   ```

## 🧪 Testing

### Verify SFTP Connection

```bash
# From inside the API container:
docker-compose exec api bash
apt-get update && apt-get install -y lftp
lftp -u sftp_bmccrm,'<password>' sftp://sftp.acdu.dk:9022 -e 'ls SFTP_BMCCRM; quit'
```

### Test Upload

1. Create a backup via the web UI: http://localhost:8001/backups
2. Click the "Upload to Offsite" button for the backup
3. Check the logs for "✅ Upload completed"
4. Verify `offsite_uploaded_at` is set in the database

## ⚠️ Breaking Changes

None - this is a feature addition.

## 📊 Database Migration

**Migration File:** `migrations/052_backup_offsite_columns.sql`

**Impact:** Adds 4 new columns to the `backup_jobs` table

- Safe to run on existing data (uses `ADD COLUMN IF NOT EXISTS`)
- No data loss risk
- Existing backups will have NULL values for the new columns

## 🔐 Security Notes

- SFTP password stored in the `.env` file (not in the repository)
- Uses paramiko's `AutoAddPolicy` for host keys
- File size verification prevents corrupt uploads
- Connection timeout prevents indefinite hangs

## 📞 Support

For issues, contact Christian Thomas or check the logs:

```bash
docker-compose logs -f api | grep -E "(offsite|SFTP|Upload)"
```

---

**Git Tag:** v1.3.75
**Previous Version:** v1.3.74
**Tested on:** Local development environment (macOS Docker)
@@ -1 +1,3 @@
"""Backup backend services, API routes, and scheduler."""

from app.backups.backend import router

@@ -10,7 +10,7 @@ from pathlib import Path
from fastapi import APIRouter, HTTPException, Query, UploadFile, File
from pydantic import BaseModel, Field

from app.core.database import execute_query, execute_update, execute_insert
from app.core.database import execute_query, execute_update, execute_insert, execute_query_single
from app.core.config import settings
from app.backups.backend.service import backup_service
from app.backups.backend.notifications import notifications
@@ -251,16 +251,16 @@ async def upload_backup(

    # Calculate retention date
    if is_monthly:
        retention_until = datetime.now() + timedelta(days=settings.MONTHLY_KEEP_MONTHS * 30)
        retention_until = datetime.now() + timedelta(days=settings.BACKUP_RETENTION_MONTHLY * 30)
    else:
        retention_until = datetime.now() + timedelta(days=settings.RETENTION_DAYS)
        retention_until = datetime.now() + timedelta(days=settings.BACKUP_RETENTION_DAYS)

    # Create backup job record
    job_id = execute_insert(
        """INSERT INTO backup_jobs
        (job_type, status, backup_format, file_path, file_size_bytes,
        checksum_sha256, is_monthly, started_at, completed_at, retention_until)
        VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)""",
        VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s) RETURNING id""",
        (backup_type, 'completed', backup_format, str(target_path), file_size,
         checksum, is_monthly, datetime.now(), datetime.now(), retention_until.date())
    )
@@ -316,6 +316,17 @@ async def restore_backup(job_id: int, request: RestoreRequest):
    logger.warning("🔧 Restore initiated: job_id=%s, type=%s, user_message=%s",
                   job_id, backup['job_type'], request.message)

    # Check if DRY-RUN mode is enabled
    if settings.BACKUP_RESTORE_DRY_RUN:
        logger.warning("🔒 DRY RUN MODE: Restore test requested but not executed")
        return {
            "success": True,
            "dry_run": True,
            "message": "DRY-RUN mode: Restore was NOT executed. Set BACKUP_RESTORE_DRY_RUN=false to actually restore.",
            "job_id": job_id,
            "job_type": backup['job_type']
        }

    try:
        # Send notification
        await notifications.send_restore_started(
@@ -327,20 +338,51 @@ async def restore_backup(job_id: int, request: RestoreRequest):
        # Perform restore based on type
        if backup['job_type'] == 'database':
            success = await backup_service.restore_database(job_id)
            if success:
                # Get the new database name from logs (created with timestamp)
                from datetime import datetime
                timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
                new_dbname = f"bmc_hub_restored_{timestamp}"

                # Parse current DATABASE_URL to get credentials
                db_url = settings.DATABASE_URL
                if '@' in db_url:
                    creds = db_url.split('@')[0].replace('postgresql://', '')
                    host_part = db_url.split('@')[1]
                    new_url = f"postgresql://{creds}@{host_part.split('/')[0]}/{new_dbname}"
                else:
                    new_url = f"postgresql://bmc_hub:bmc_hub@postgres:5432/{new_dbname}"

                logger.info("✅ Restore completed successfully: job_id=%s", job_id)
                return {
                    "success": True,
                    "message": "Database restored to NEW database (safe!)",
                    "new_database": new_dbname,
                    "instructions": [
                        f"1. Update .env: DATABASE_URL={new_url}",
                        "2. Restart: docker-compose restart api",
                        "3. Test system thoroughly",
                        "4. If OK: Drop old DB, rename new DB to 'bmc_hub'",
                        "5. If NOT OK: Just revert .env and restart"
                    ]
                }
        elif backup['job_type'] == 'files':
            success = await backup_service.restore_files(job_id)
            if success:
                logger.info("✅ Files restore completed: job_id=%s", job_id)
                return {"success": True, "message": "Files restore completed successfully"}
        elif backup['job_type'] == 'full':
            # Restore both database and files
            db_success = await backup_service.restore_database(job_id)
            files_success = await backup_service.restore_files(job_id)
            success = db_success and files_success
            if success:
                logger.info("✅ Full restore completed: job_id=%s", job_id)
                return {"success": True, "message": "Full restore completed - check logs for database name"}
        else:
            raise HTTPException(status_code=400, detail=f"Unknown backup type: {backup['job_type']}")

        if success:
            logger.info("✅ Restore completed successfully: job_id=%s", job_id)
            return {"success": True, "message": "Restore completed successfully"}
        else:
        # If we get here, restore failed
        logger.error("❌ Restore failed: job_id=%s", job_id)
        raise HTTPException(status_code=500, detail="Restore operation failed - check logs")

@@ -16,7 +16,7 @@ import paramiko
from stat import S_ISDIR

from app.core.config import settings
from app.core.database import execute_query, execute_insert, execute_update
from app.core.database import execute_query, execute_insert, execute_update, execute_query_single

logger = logging.getLogger(__name__)

@@ -57,7 +57,7 @@ class BackupService:
        # Create backup job record
        job_id = execute_insert(
            """INSERT INTO backup_jobs (job_type, status, backup_format, is_monthly, started_at)
            VALUES (%s, %s, %s, %s, %s)""",
            VALUES (%s, %s, %s, %s, %s) RETURNING id""",
            ('database', 'running', backup_format, is_monthly, datetime.now())
        )

@@ -101,9 +101,9 @@ class BackupService:

        # Calculate retention date
        if is_monthly:
            retention_until = datetime.now() + timedelta(days=settings.MONTHLY_KEEP_MONTHS * 30)
            retention_until = datetime.now() + timedelta(days=settings.BACKUP_RETENTION_MONTHLY * 30)
        else:
            retention_until = datetime.now() + timedelta(days=settings.RETENTION_DAYS)
            retention_until = datetime.now() + timedelta(days=settings.BACKUP_RETENTION_DAYS)

        # Update job record
        execute_update(
@@ -179,7 +179,7 @@ class BackupService:
        job_id = execute_insert(
            """INSERT INTO backup_jobs
            (job_type, status, backup_format, includes_uploads, includes_logs, includes_data, started_at)
            VALUES (%s, %s, %s, %s, %s, %s, %s)""",
            VALUES (%s, %s, %s, %s, %s, %s, %s) RETURNING id""",
            ('files', 'running', 'tar.gz',
             settings.BACKUP_INCLUDE_UPLOADS,
             settings.BACKUP_INCLUDE_LOGS,
@@ -219,7 +219,7 @@ class BackupService:
        checksum = self._calculate_checksum(backup_path)

        # Calculate retention date (files use daily retention)
        retention_until = datetime.now() + timedelta(days=settings.RETENTION_DAYS)
        retention_until = datetime.now() + timedelta(days=settings.BACKUP_RETENTION_DAYS)

        # Update job record
        execute_update(
@@ -318,7 +318,14 @@ class BackupService:

    async def restore_database(self, job_id: int) -> bool:
        """
        Restore database from backup with maintenance mode
        Restore database from backup to NEW database with timestamp suffix

        Strategy:
        1. Create new database: bmc_hub_restored_YYYYMMDD_HHMMSS
        2. Restore backup to NEW database (no conflicts!)
        3. Return new database name in response
        4. User updates .env to point to new database
        5. Test system, then cleanup old database

        Args:
            job_id: Backup job ID to restore from
@@ -329,9 +336,12 @@ class BackupService:
        if settings.BACKUP_READ_ONLY:
            logger.error("❌ Restore blocked: BACKUP_READ_ONLY=true")
            return False

        if settings.BACKUP_RESTORE_DRY_RUN:
            logger.warning("🔄 DRY RUN MODE: Would restore database from backup job %s", job_id)
            logger.warning("🔄 Set BACKUP_RESTORE_DRY_RUN=false to actually restore")
            return False
        # Get backup job
        backup = execute_query(
        backup = execute_query_single(
            "SELECT * FROM backup_jobs WHERE id = %s AND job_type = 'database'",
            (job_id,))

@@ -345,7 +355,13 @@ class BackupService:
            logger.error("❌ Backup file not found: %s", backup_path)
            return False

        # Generate new database name with timestamp
        from datetime import datetime
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
        new_dbname = f"bmc_hub_restored_{timestamp}"

        logger.info("🔄 Starting database restore from backup: %s", backup_path.name)
        logger.info("🎯 Target: NEW database '%s' (safe restore!)", new_dbname)

        # Enable maintenance mode
        await self.set_maintenance_mode(True, "Database restore i gang", eta_minutes=5)
@@ -362,8 +378,8 @@ class BackupService:

        # Acquire file lock to prevent concurrent operations
        lock_file = self.backup_dir / ".restore.lock"
        with open(lock_file, 'w') as f:
            fcntl.flock(f.fileno(), fcntl.LOCK_EX)
        with open(lock_file, 'w') as lock_f:
            fcntl.flock(lock_f.fileno(), fcntl.LOCK_EX)

            # Parse database connection info
            env = os.environ.copy()
@@ -378,35 +394,97 @@ class BackupService:

            env['PGPASSWORD'] = password

            # Step 1: Create new empty database
            logger.info("📦 Creating new database: %s", new_dbname)
            create_cmd = ['psql', '-h', host, '-U', user, '-d', 'postgres', '-c',
                          f"CREATE DATABASE {new_dbname} OWNER {user};"]
            result = subprocess.run(create_cmd, stderr=subprocess.PIPE, stdout=subprocess.PIPE,
                                    text=True, env=env)

            if result.returncode != 0:
                logger.error("❌ Failed to create database: %s", result.stderr)
                fcntl.flock(lock_f.fileno(), fcntl.LOCK_UN)
                raise RuntimeError(f"CREATE DATABASE failed: {result.stderr}")

            logger.info("✅ New database created: %s", new_dbname)

            # Step 2: Restore to NEW database (no conflicts!)
            # Build restore command based on format
            if backup['backup_format'] == 'dump':
                # Restore from compressed custom format
                cmd = ['pg_restore', '-h', host, '-U', user, '-d', dbname, '--clean', '--if-exists']
                cmd = ['pg_restore', '-h', host, '-U', user, '-d', new_dbname]

                logger.info("📥 Executing: %s < %s", ' '.join(cmd), backup_path)
                logger.info("📥 Restoring to %s: %s < %s", new_dbname, ' '.join(cmd), backup_path)

                with open(backup_path, 'rb') as f:
                    result = subprocess.run(cmd, stdin=f, stderr=subprocess.PIPE, check=True, env=env)
                    result = subprocess.run(cmd, stdin=f, stderr=subprocess.PIPE, text=True, env=env)

                # pg_restore returns 1 even for warnings, check if there are real errors
                if result.returncode != 0:
                    logger.warning("⚠️ pg_restore returned code %s", result.returncode)
                    if result.stderr:
                        logger.warning("pg_restore stderr: %s", result.stderr[:500])

                    # Check for real errors vs harmless config warnings
                    stderr_lower = result.stderr.lower() if result.stderr else ""

                    # Harmless errors to ignore
                    harmless_errors = [
                        "transaction_timeout",  # Config parameter that may not exist in all PG versions
                        "idle_in_transaction_session_timeout"  # Another version-specific parameter
                    ]

                    # Check if errors are only harmless ones
                    is_harmless = any(err in stderr_lower for err in harmless_errors)
                    has_real_errors = "error:" in stderr_lower and not all(
                        err in stderr_lower for err in harmless_errors
                    )

                    if has_real_errors and not is_harmless:
                        logger.error("❌ pg_restore had REAL errors: %s", result.stderr[:1000])
                        # Try to drop the failed database
                        subprocess.run(['psql', '-h', host, '-U', user, '-d', 'postgres', '-c',
                                        f"DROP DATABASE IF EXISTS {new_dbname};"], env=env)
                        raise RuntimeError(f"pg_restore failed with errors")
                    else:
                        logger.info("✅ Restore completed (harmless config warnings ignored)")

            else:
                # Restore from plain SQL
                cmd = ['psql', '-h', host, '-U', user, '-d', dbname]
                cmd = ['psql', '-h', host, '-U', user, '-d', new_dbname]

                logger.info("📥 Executing: %s < %s", ' '.join(cmd), backup_path)

                with open(backup_path, 'rb') as f:
                    result = subprocess.run(cmd, stdin=f, stderr=subprocess.PIPE, check=True, env=env)
                    result = subprocess.run(cmd, stdin=f, stderr=subprocess.PIPE, text=True, env=env)

                if result.returncode != 0:
                    logger.error("❌ psql stderr: %s", result.stderr)
                    raise RuntimeError(f"psql failed with code {result.returncode}")

            # Release file lock
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)
            fcntl.flock(lock_f.fileno(), fcntl.LOCK_UN)

            logger.info("✅ Database restore completed successfully")
            logger.info("✅ Database restore completed successfully to: %s", new_dbname)
            logger.info("🔧 NEXT STEPS:")
            logger.info("   1. Update .env: DATABASE_URL=postgresql://%s:%s@%s:5432/%s",
                        user, "***", host, new_dbname)
            logger.info("   2. Restart: docker-compose restart api")
            logger.info("   3. Test system thoroughly")
            logger.info("   4. If OK, cleanup old database:")
            logger.info("      docker exec bmc-hub-postgres psql -U %s -d postgres -c 'DROP DATABASE %s;'",
                        user, dbname)
            logger.info("      docker exec bmc-hub-postgres psql -U %s -d postgres -c 'ALTER DATABASE %s RENAME TO %s;'",
                        user, new_dbname, dbname)
            logger.info("   5. Revert .env and restart")

            # Log notification
            # Store new database name in notification for user
            execute_insert(
                """INSERT INTO backup_notifications (backup_job_id, event_type, message)
                VALUES (%s, %s, %s)""",
                (job_id, 'restore_started', f'Database restored from backup: {backup_path.name}')
                VALUES (%s, %s, %s) RETURNING id""",
                (job_id, 'backup_success',
                 f'✅ Database restored to: {new_dbname}\n'
                 f'Update .env: DATABASE_URL=postgresql://{user}:PASSWORD@{host}:5432/{new_dbname}')
            )

            return True
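The harmless-vs-real error check above can also be evaluated per stderr line, which avoids one harmless match masking a genuine error elsewhere in the output. A sketch of that variant (hypothetical helper, not the committed code):

```python
HARMLESS_PATTERNS = (
    "transaction_timeout",                  # config parameter missing in some PG versions
    "idle_in_transaction_session_timeout",  # likewise version-specific
)

def stderr_is_fatal(stderr):
    """Decide whether pg_restore stderr indicates a real failure.

    pg_restore exits non-zero even for harmless warnings, so the exit code
    alone is not enough; treat the output as fatal only when it contains an
    'error:' line that is not one of the known-harmless config parameters.
    """
    for line in (stderr or "").lower().splitlines():
        if "error:" in line and not any(p in line for p in HARMLESS_PATTERNS):
            return True
    return False
```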
@@ -439,6 +517,11 @@ class BackupService:
            logger.error("❌ Restore blocked: BACKUP_READ_ONLY=true")
            return False

        if settings.BACKUP_RESTORE_DRY_RUN:
            logger.warning("🔄 DRY RUN MODE: Would restore files from backup job %s", job_id)
            logger.warning("🔄 Set BACKUP_RESTORE_DRY_RUN=false to actually restore")
            return False

        # Get backup job
        backup = execute_query_single(
            "SELECT * FROM backup_jobs WHERE id = %s AND job_type = 'files'",
@@ -549,11 +632,16 @@ class BackupService:

            # Create remote directory if needed
            remote_path = settings.SFTP_REMOTE_PATH
            if remote_path and remote_path not in ('.', '/', ''):
                logger.info("📁 Ensuring remote directory exists: %s", remote_path)
                self._ensure_remote_directory(sftp, remote_path)
                logger.info("✅ Remote directory ready")

            # Upload file
            remote_file = f"{remote_path}/{backup_path.name}"
            logger.info("📤 Uploading to: %s", remote_file)
            sftp.put(str(backup_path), remote_file)
            logger.info("✅ Upload completed")

            # Verify upload
            remote_stat = sftp.stat(remote_file)
@@ -625,7 +713,7 @@ class BackupService:
        # Log notification
        execute_insert(
            """INSERT INTO backup_notifications (event_type, message)
            VALUES (%s, %s)""",
            VALUES (%s, %s) RETURNING id""",
            ('storage_low',
             f"Backup storage usage at {usage_pct:.1f}% ({stats['total_size_gb']:.2f} GB / {settings.BACKUP_MAX_SIZE_GB} GB)")
        )
@@ -669,21 +757,28 @@ class BackupService:

    def _ensure_remote_directory(self, sftp: paramiko.SFTPClient, path: str):
        """Create remote directory if it doesn't exist (recursive)"""
        dirs = []
        current = path
        # Skip if path is root or current directory
        if not path or path in ('.', '/', ''):
            return

        while current != '/':
            dirs.append(current)
            current = os.path.dirname(current)

        dirs.reverse()

        for dir_path in dirs:
        # Try to stat the directory
        try:
            sftp.stat(dir_path)
            sftp.stat(path)
            logger.info("✅ Directory exists: %s", path)
            return
        except FileNotFoundError:
            sftp.mkdir(dir_path)
            logger.info("📁 Created remote directory: %s", dir_path)
            # Directory doesn't exist, create it
            try:
                # Try to create parent directory first
                parent = os.path.dirname(path)
                if parent and parent != path:
                    self._ensure_remote_directory(sftp, parent)

                # Create this directory
                sftp.mkdir(path)
                logger.info("📁 Created remote directory: %s", path)
            except Exception as e:
                logger.warning("⚠️ Could not create directory %s: %s", path, str(e))


# Singleton instance

@@ -8,13 +8,13 @@ from fastapi.templating import Jinja2Templates
from fastapi.responses import HTMLResponse

router = APIRouter()
templates = Jinja2Templates(directory="app")
templates = Jinja2Templates(directory="app/backups/templates")


@router.get("/backups", response_class=HTMLResponse)
async def backups_dashboard(request: Request):
    """Backup system dashboard page"""
    return templates.TemplateResponse("backups/templates/index.html", {
    return templates.TemplateResponse("index.html", {
        "request": request,
        "title": "Backup System"
    })

@@ -605,6 +605,7 @@
        } catch (error) {
            resultDiv.innerHTML = `<div class="alert alert-danger">Upload error: ${error.message}</div>`;
        }
        */
    }

    // Show restore modal
@@ -617,12 +618,14 @@

    // Confirm restore
    async function confirmRestore() {
        alert('⚠️ Restore API er ikke implementeret endnu');
        return;

        /* Disabled until API implemented:
        if (!selectedJobId) return;

        // Show loading state
        const modalBody = document.querySelector('#restoreModal .modal-body');
        const confirmBtn = document.querySelector('#restoreModal .btn-danger');
        confirmBtn.disabled = true;
        confirmBtn.innerHTML = '<span class="spinner-border spinner-border-sm me-2"></span>Restoring...';

        try {
            const response = await fetch(`/api/v1/backups/restore/${selectedJobId}`, {
                method: 'POST',
@@ -632,39 +635,132 @@

            const result = await response.json();

            if (response.ok && result.success) {
                // Hide modal
                restoreModal.hide();

            if (response.ok) {
                alert('Restore started! System entering maintenance mode.');
                window.location.reload();
                // Show success with new database instructions
                if (result.new_database) {
                    showRestoreSuccess(result);
                } else {
                    alert('Restore failed: ' + result.detail);
                    alert('✅ Restore completed successfully!');
                    window.location.reload();
                }
            } else {
                alert('❌ Restore failed: ' + (result.detail || result.message || 'Unknown error'));
                confirmBtn.disabled = false;
                confirmBtn.innerHTML = 'Restore';
            }
        } catch (error) {
            alert('Restore error: ' + error.message);
            alert('❌ Restore error: ' + error.message);
            confirmBtn.disabled = false;
            confirmBtn.innerHTML = 'Restore';
        }
    }

    function showRestoreSuccess(result) {
        // Create modal with instructions
        const instructionsHtml = `
            <div class="modal fade" id="restoreSuccessModal" tabindex="-1" data-bs-backdrop="static">
                <div class="modal-dialog modal-lg">
                    <div class="modal-content">
                        <div class="modal-header bg-success text-white">
                            <h5 class="modal-title">
                                <i class="bi bi-check-circle-fill me-2"></i>
                                Database Restored Successfully!
                            </h5>
                        </div>
                        <div class="modal-body">
                            <div class="alert alert-info">
                                <i class="bi bi-info-circle me-2"></i>
                                <strong>Safe Restore:</strong> Database restored to NEW database:
                                <code>${result.new_database}</code>
                            </div>

                            <h6 class="mt-4 mb-3">📋 Next Steps:</h6>
                            <ol class="list-group list-group-numbered">
                                ${result.instructions.map(instr => `
                                    <li class="list-group-item">
                                        <div class="d-flex justify-content-between align-items-start">
                                            <div class="ms-2 me-auto">
                                                ${instr}
                                                ${instr.includes('DATABASE_URL') ? `
                                                    <button class="btn btn-sm btn-outline-primary mt-2" onclick="copyToClipboard('${result.instructions[0].split(': ')[1]}')">
                                                        <i class="bi bi-clipboard"></i> Copy DATABASE_URL
                                                    </button>
                                                ` : ''}
                                            </div>
                                        </div>
                                    </li>
                                `).join('')}
                            </ol>

                            <div class="alert alert-warning mt-4">
                                <i class="bi bi-exclamation-triangle me-2"></i>
                                <strong>Important:</strong> Test system thoroughly before completing cleanup!
                            </div>

                            <div class="mt-4">
                                <h6>🔧 Cleanup Commands (after testing):</h6>
                                <pre class="bg-dark text-light p-3 rounded"><code>docker-compose stop api
echo 'DROP DATABASE bmc_hub;' | docker exec -i bmc-hub-postgres psql -U bmc_hub -d postgres
echo 'ALTER DATABASE ${result.new_database} RENAME TO bmc_hub;' | docker exec -i bmc-hub-postgres psql -U bmc_hub -d postgres
# Revert .env to use bmc_hub
docker-compose start api</code></pre>
                            </div>
                        </div>
                        <div class="modal-footer">
                            <button type="button" class="btn btn-primary" onclick="location.reload()">
                                <i class="bi bi-arrow-clockwise me-2"></i>Reload Page
                            </button>
                        </div>
                    </div>
                </div>
            </div>
        `;

        // Append to body and show
        document.body.insertAdjacentHTML('beforeend', instructionsHtml);
        const successModal = new bootstrap.Modal(document.getElementById('restoreSuccessModal'));
        successModal.show();
    }

    function copyToClipboard(text) {
        navigator.clipboard.writeText(text).then(() => {
            alert('✅ Copied to clipboard!');
        }).catch(err => {
            alert('❌ Failed to copy: ' + err);
        });
    }

    // Upload to offsite
    async function uploadOffsite(jobId) {
        alert('⚠️ Offsite upload API er ikke implementeret endnu');
        return;
        if (!confirm('☁️ Upload this backup to offsite SFTP storage?\n\nTarget: sftp.acdu.dk:9022/backups')) return;

        /* Disabled until API implemented:
        if (!confirm('Upload this backup to offsite storage?')) return;
        // Show loading indicator
        const btn = event.target.closest('button');
        const originalHtml = btn.innerHTML;
        btn.disabled = true;
        btn.innerHTML = '<span class="spinner-border spinner-border-sm me-2"></span>Uploading...';

        try {
            const response = await fetch(`/api/v1/backups/offsite/${jobId}`, {method: 'POST'});
            const result = await response.json();

            // Reset button
            btn.disabled = false;
            btn.innerHTML = originalHtml;

            if (response.ok) {
                alert(result.message);
                alert('✅ ' + result.message);
                loadBackups();
            } else {
                alert('Upload failed: ' + result.detail);
                alert('❌ Upload failed: ' + result.detail);
            }
        } catch (error) {
            alert('Upload error: ' + error.message);
            btn.disabled = false;
            btn.innerHTML = originalHtml;
            alert('❌ Upload error: ' + error.message);
        }
    }

@@ -688,6 +784,7 @@
        } catch (error) {
            alert('Delete error: ' + error.message);
        }
        */
    }

    // Acknowledge notification
@@ -702,6 +799,7 @@
        } catch (error) {
            console.error('Acknowledge error:', error);
        }
        */
    }

    // Refresh backups

@@ -105,10 +105,17 @@ class Settings(BaseSettings):
    BACKUP_STORAGE_PATH: str = "/app/backups"
    BACKUP_DRY_RUN: bool = False
    BACKUP_READ_ONLY: bool = False
    BACKUP_RESTORE_DRY_RUN: bool = True  # SAFETY: Test restore uden at overskrive data
    BACKUP_RETENTION_DAYS: int = 30
    BACKUP_RETENTION_MONTHLY: int = 12
    BACKUP_MAX_SIZE_GB: int = 100
    STORAGE_WARNING_THRESHOLD_PCT: int = 80
    DB_DAILY_FORMAT: str = "dump"  # Compressed format for daily backups
    DB_MONTHLY_FORMAT: str = "sql"  # Plain SQL for monthly backups
    BACKUP_INCLUDE_UPLOADS: bool = True  # Include uploads/ in file backups
    BACKUP_INCLUDE_LOGS: bool = True  # Include logs/ in file backups
    BACKUP_INCLUDE_DATA: bool = True  # Include data/ in file backups
    UPLOAD_DIR: str = "uploads"  # Upload directory path

    # Offsite Backup Settings (SFTP)
    OFFSITE_ENABLED: bool = False

migrations/052_backup_offsite_columns.sql (new file, 17 lines)

@@ -0,0 +1,17 @@
-- Migration 052: Add offsite status columns to backup_jobs
-- Adds missing columns for SFTP offsite upload tracking

ALTER TABLE backup_jobs
    ADD COLUMN IF NOT EXISTS offsite_status VARCHAR(20) DEFAULT 'pending' CHECK (offsite_status IN ('pending', 'uploading', 'uploaded', 'failed')),
    ADD COLUMN IF NOT EXISTS offsite_location VARCHAR(500),
    ADD COLUMN IF NOT EXISTS offsite_attempts INTEGER DEFAULT 0,
    ADD COLUMN IF NOT EXISTS offsite_last_error TEXT;

-- Create index for offsite status filtering
CREATE INDEX IF NOT EXISTS idx_backup_jobs_offsite_status ON backup_jobs(offsite_status);

-- Comments
COMMENT ON COLUMN backup_jobs.offsite_status IS 'Status of SFTP offsite upload: pending, uploading, uploaded, or failed';
COMMENT ON COLUMN backup_jobs.offsite_location IS 'Remote path on SFTP server where backup was uploaded';
COMMENT ON COLUMN backup_jobs.offsite_attempts IS 'Number of offsite upload attempts';
COMMENT ON COLUMN backup_jobs.offsite_last_error IS 'Last error message from failed offsite upload';