Python: Back up files to Google Drive using the API

Purpose:
Back up a file from a hosting server to Google Drive as a contingency plan.

Prerequisites:
A Google service account key. (Create a Google service account and a key for API access, enable the Google Drive API for the project, and share the target Drive folder with the service account's email address so it is allowed to upload there.)
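
If you prefer the command line over the Cloud Console, the service account and key can be created roughly like this (a sketch using the gcloud CLI; the account name drive-backup and project my-project are placeholders):

# Enable the Drive API for the project (placeholder project ID)
gcloud services enable drive.googleapis.com --project my-project

# Create the service account (placeholder name)
gcloud iam service-accounts create drive-backup --project my-project

# Create a JSON key; the output path matches SERVICE_ACCOUNT_FILE used below
gcloud iam service-accounts keys create /home/user/GoogleDrive-serviceAccount.json \
    --iam-account drive-backup@my-project.iam.gserviceaccount.com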

Step 1.) Install Python and the necessary packages.

pip install --upgrade google-api-python-client google-auth-httplib2 google-auth

Step 2.) Set Variables
SCRIPT_DIRECTORY = "/home/user/docker_planka"
SERVICE_ACCOUNT_FILE = "/home/user/GoogleDrive-serviceAccount.json"
DRIVE_FOLDER_ID = "googleDriveFolderID"
FILE_SUFFIX = "filename.tgz" (used to detect the file to upload and the old files to delete)

The script uploads the single newest matching file and deletes the old copies from the Drive folder, as long as their names are not identical to the newly uploaded file's.

import os
import glob
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload
from google.oauth2 import service_account

# ====== CONFIGURATION ======
# Change working directory for cron compatibility
SCRIPT_DIRECTORY = "/home/user/docker_planka"
os.chdir(SCRIPT_DIRECTORY)

# Google Service Account Key
SERVICE_ACCOUNT_FILE = "/home/user/GoogleDrive-serviceAccount.json"

# Google Drive folder ID
DRIVE_FOLDER_ID = "1NXqKQi69mOx3FpgnXmcdjhZfjE-xQfaL"

# File name suffix
FILE_SUFFIX = "planka.tgz"

SCOPES = ['https://www.googleapis.com/auth/drive.file']

def get_latest_backup_file(suffix):
    files = glob.glob(f"*{suffix}")
    if not files:
        raise FileNotFoundError(f"No files matching '*{suffix}' found in current directory.")
    latest_file = max(files, key=os.path.getmtime)
    return latest_file

def upload_to_drive(service, local_file_path, drive_folder_id):
    file_metadata = {'name': os.path.basename(local_file_path)}
    if drive_folder_id:
        file_metadata['parents'] = [drive_folder_id]
    media = MediaFileUpload(local_file_path, resumable=True)
    file = service.files().create(
        body=file_metadata,
        media_body=media,
        fields='id, name'
    ).execute()
    print(f"Uploaded '{local_file_path}' as '{file.get('name')}' (ID: {file.get('id')})")
    return file.get('id'), file.get('name')

def delete_old_backups_in_drive(service, folder_id, suffix, exclude_name=None):
    query = f"'{folder_id}' in parents and name contains '{suffix}' and trashed = false"
    results = service.files().list(q=query, fields="files(id, name)").execute()
    files = results.get('files', [])
    for file in files:
        if file['name'] != exclude_name:
            print(f"Deleting old backup '{file['name']}' (ID: {file['id']}) from Google Drive.")
            service.files().delete(fileId=file['id']).execute()

def main():
    credentials = service_account.Credentials.from_service_account_file(
        SERVICE_ACCOUNT_FILE, scopes=SCOPES
    )
    service = build('drive', 'v3', credentials=credentials)

    # Step 1: Find latest backup file
    backup_file = get_latest_backup_file(FILE_SUFFIX)

    # Step 2: Upload the latest backup
    uploaded_file_id, uploaded_file_name = upload_to_drive(service, backup_file, DRIVE_FOLDER_ID)

    # Step 3: Delete old backups from Drive, except the just-uploaded one
    delete_old_backups_in_drive(service, DRIVE_FOLDER_ID, FILE_SUFFIX, exclude_name=uploaded_file_name)

if __name__ == "__main__":
    main()
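
Since the script changes into SCRIPT_DIRECTORY on its own, it is safe to run from cron. A crontab entry could look like this (the script path, log path, and schedule are examples):

# Upload the newest backup to Google Drive every day at 03:30 (example paths)
30 3 * * * /usr/bin/python3 /home/user/docker_planka/drive_backup.py >> /home/user/drive_backup.log 2>&1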

Docker Planka: Back up database and files

If you run Planka in Docker, you are likely using the docker-backup.sh script provided by https://github.com/plankanban/planka

Changes from the original script:
– Docker Swarm compatibility (filters by container name)
– cd into the script's directory (for when you run it from cron)
– Replaced the UTC date format with the system's local date output
– Deletes any old .tgz files after creating the new backup

Variables to set:
SCRIPT_DIRECTORY="/script/location"
BACKUP_DATETIME=$(date +%Y-%m-%d-%H%M)
BACKUP_TMPDIR=$(mktemp -d planka-backup-XXXXXX)
BACKUP_FILENAME="${BACKUP_DATETIME}-planka.tgz"
OLD_BACKUP_FILES=(*-planka.tgz)

#!/bin/bash

set -e

# Location for this script
SCRIPT_DIRECTORY="/home/btro/docker_planka"

# CD to script's directory (for cron compatibility)
cd "${SCRIPT_DIRECTORY}"

# Other environment variables to set
BACKUP_DATETIME=$(date +%Y-%m-%d-%H%M)
BACKUP_TMPDIR=$(mktemp -d planka-backup-XXXXXX)
BACKUP_FILENAME="${BACKUP_DATETIME}-planka.tgz"
OLD_BACKUP_FILES=(*-planka.tgz)

PLANKA_DOCKER_CONTAINER_POSTGRES=$(docker ps --filter "name=^planka_postgres" --format "{{.Names}}" | head -n1)
PLANKA_DOCKER_CONTAINER_PLANKA=$(docker ps --filter "name=^planka_planka" --format "{{.Names}}" | head -n1)

if [[ -z "$PLANKA_DOCKER_CONTAINER_POSTGRES" ]]; then
    echo "Error: No running planka_postgres container found!"
    exit 1
fi
if [[ -z "$PLANKA_DOCKER_CONTAINER_PLANKA" ]]; then
    echo "Error: No running planka_planka container found!"
    exit 1
fi

# Export DB
echo -n "Exporting postgres database ... "
docker exec -t "$PLANKA_DOCKER_CONTAINER_POSTGRES" pg_dumpall -c -U postgres > "$BACKUP_TMPDIR/postgres.sql"
echo "Success!"

# Export Docker volumes
for item in favicons user-avatars background-images attachments; do
    # Source path
    SRC="/app/public/$item"
    [[ "$item" = "attachments" ]] && SRC="/app/private/attachments"
    echo -n "Exporting $item ... "
    docker run --rm --user $(id -u):$(id -g) \
        --volumes-from "$PLANKA_DOCKER_CONTAINER_PLANKA" \
        -v "$(pwd)/$BACKUP_TMPDIR:/backup" ubuntu \
        bash -c "[ -d $SRC ] && cp -r $SRC /backup/$item || echo 'No $item to backup.'"
    echo "Done!"
done

# Fix permissions (optional, but keeps the archive's ownership consistent)
chown -R $(id -u):$(id -g) "$BACKUP_TMPDIR"

# Create final archive (everything in the temp dir, no extra folder in archive)
echo -n "Creating tarball $BACKUP_FILENAME ... "
tar -czf "$BACKUP_FILENAME" -C "$BACKUP_TMPDIR" .
echo "Success!"

# Remove temp dir
rm -rf "$BACKUP_TMPDIR"

# Delete previous backup(s) (except the newly created one)
for file in "${OLD_BACKUP_FILES[@]}"; do
    if [[ "$file" != "$BACKUP_FILENAME" && -f "$file" ]]; then
        echo "Deleting previous backup: $file"
        rm -f "$file"
    fi
done

echo "Backup Complete! Archive is at $BACKUP_FILENAME"

Automation: Redeploy a Docker container via GitHub workflow

Before explaining what is going on, here is my Docker setup.

General setup:
GitHub Actions workflow -> remote server -> build the Docker image if not present and run the container.

Problem:
If the container goes down, it cannot come back up on its own because the environment secrets it needs are kept in GitHub's repository secrets.

Solution:
A Bash script checks the “docker ps” output. If the expected container is missing or not healthy, it triggers a GitHub workflow dispatch and emails the admin about the outage. The workflow dispatch re-runs the deployment cycle from the general setup listed above.

Example output of my “docker ps”

github@~ $ docker ps
CONTAINER ID   IMAGE                              COMMAND                   CREATED        STATUS                  PORTS                                             NAMES
b4bf4aa17f3e   btc2api:1.1.0                      "docker-entrypoint.s…"   21 hours ago   Up 21 hours (healthy)   0.0.0.0:4431->443/tcp, [::]:4431->443/tcp         btc2-api-110-api-1
5ab42117d482   ghcr.io/plankanban/planka:latest   "docker-entrypoint.s…"   10 days ago    Up 10 days (healthy)    0.0.0.0:3001->1337/tcp, [::]:3001->1337/tcp       planka_planka.1.pk0zp8xj90qqiazsowxm4aj4y
240cf6039329   mongo:8.0.9-noble                  "docker-entrypoint.s…"   10 days ago    Up 10 days (healthy)    0.0.0.0:27017->27017/tcp, [::]:27017->27017/tcp   mongodb_btc_mongodb.1.gvsz1v742huwxwbpzosm0adqf
1185a81b312e   postgres:16-alpine                 "docker-entrypoint.s…"   2 weeks ago    Up 2 weeks (healthy)    5432/tcp                                          planka_postgres.1.a81wcxtmfmnz4ajtqa072k92c

Bash script to check whether btc2api is running and in a “(healthy)” state. I have cron running this every minute (the cron entry is shown after the script).

#!/bin/bash

# Settings
IMAGE_KEYWORD="btc2api"
REPO="beetron/btc2_API"
TOKEN=""
WORKFLOW_FILE="deploy-v1.yml"

# Check if btc2api is running and healthy
if ! docker ps | grep "$IMAGE_KEYWORD" | grep -q "(healthy)"; then
  echo "btc2api was down at: $(date)" | s-nail -s "btc2api was down" admin@mail.com

  # Trigger GitHub Actions workflow_dispatch
  curl -X POST \
    -H "Accept: application/vnd.github+json" \
    -H "Authorization: Bearer $TOKEN" \
    "https://api.github.com/repos/$REPO/actions/workflows/$WORKFLOW_FILE/dispatches" \
    -d '{"ref":"main"}'

else
  echo "$IMAGE_KEYWORD is running and healthy"
fi
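
For reference, the cron entry that runs the check every minute could look like this (the script and log paths are examples):

# Check btc2api health every minute (example paths)
* * * * * /home/user/check_btc2api.sh >> /home/user/check_btc2api.log 2>&1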

To get an idea of my deploy-v1.yml, you can check my API repo: https://github.com/beetron/btc2_API