diff --git a/docs/_config.yml b/docs/_config.yml index 917554c..3bccb4f 100644 --- a/docs/_config.yml +++ b/docs/_config.yml @@ -20,7 +20,7 @@ description: >- # this means to ignore newlines until "baseurl:" It supports local storage, AWS S3 or any S3 Alternatives for Object Storage, and SSH compatible storage. baseurl: "" # the subpath of your site, e.g. /blog -url: "jkaninda.github.io/mysql-bkup/" # the base hostname & protocol for your site, e.g. http://example.com +url: "" # the base hostname & protocol for your site, e.g. http://example.com twitter_username: jonaskaninda github_username: jkaninda diff --git a/docs/how-tos/azure-blob.md b/docs/how-tos/azure-blob.md index e3e5bcc..3ded41f 100644 --- a/docs/how-tos/azure-blob.md +++ b/docs/how-tos/azure-blob.md @@ -4,22 +4,43 @@ layout: default parent: How Tos nav_order: 5 --- -# Azure Blob storage -{: .note } -As described on local backup section, to change the storage of you backup and use Azure Blob as storage. You need to add `--storage azure` (-s azure). -You can also specify a folder where you want to save you data by adding `--path my-custom-path` flag. +# Backup to Azure Blob Storage +To store your backups on Azure Blob Storage, you can configure the backup process to use the `--storage azure` option. -## Backup to Azure Blob storage +This section explains how to set up and configure Azure Blob-based backups. -```yml +--- + +## Configuration Steps + +1. **Specify the Storage Type** + Add the `--storage azure` flag to your backup command. + +2. **Set the Blob Path** + Optionally, specify a custom folder within your Azure Blob container where backups will be stored using the `--path` flag. + Example: `--path my-custom-path`. + +3. **Required Environment Variables** + The following environment variables are mandatory for Azure Blob-based backups: + + - `AZURE_STORAGE_CONTAINER_NAME`: The name of the Azure Blob container where backups will be stored. 
+ - `AZURE_STORAGE_ACCOUNT_NAME`: The name of your Azure Storage account. + - `AZURE_STORAGE_ACCOUNT_KEY`: The access key for your Azure Storage account. + +--- + +## Example Configuration + +Below is an example `docker-compose.yml` configuration for backing up to Azure Blob Storage: + +```yaml services: mysql-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/mysql-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/mysql-bkup/releases + # for available releases. image: jkaninda/mysql-bkup container_name: mysql-bkup command: backup --storage azure -d database --path my-custom-path @@ -29,16 +50,23 @@ services: - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password - ## Azure Blob configurations + ## Azure Blob Configuration - AZURE_STORAGE_CONTAINER_NAME=backup-container - AZURE_STORAGE_ACCOUNT_NAME=account-name - AZURE_STORAGE_ACCOUNT_KEY=Ppby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw== - # mysql-bkup container must be connected to the same network with your database + + # Ensure the mysql-bkup container is connected to the same network as your database networks: - web + networks: web: ``` +--- +## Key Notes +- **Custom Path**: Use the `--path` flag to specify a folder within your Azure Blob container for organizing backups. +- **Security**: Ensure your `AZURE_STORAGE_ACCOUNT_KEY` is kept secure and not exposed in public repositories. +- **Compatibility**: This configuration works with Azure Blob Storage and other compatible storage solutions.
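For one-off runs outside of compose, the three Azure variables (plus the database credentials) can be collected in an env file and passed to `docker run` with `--env-file`, as shown in the Quickstart. A sketch of such a file — every value below is a placeholder:

```
DB_HOST=mysql
DB_PORT=3306
DB_NAME=database
DB_USERNAME=username
DB_PASSWORD=password
AZURE_STORAGE_CONTAINER_NAME=backup-container
AZURE_STORAGE_ACCOUNT_NAME=account-name
AZURE_STORAGE_ACCOUNT_KEY=xxxx
```

Keeping the account key in an env file (excluded from version control) avoids committing it to your compose file.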
diff --git a/docs/how-tos/backup-to-ftp.md b/docs/how-tos/backup-to-ftp.md index a1ff18a..a70a95c 100644 --- a/docs/how-tos/backup-to-ftp.md +++ b/docs/how-tos/backup-to-ftp.md @@ -4,41 +4,72 @@ layout: default parent: How Tos nav_order: 4 --- -# Backup to FTP remote server +# Backup to FTP Remote Server -As described for SSH backup section, to change the storage of your backup and use FTP Remote server as storage. You need to add `--storage ftp`. -You need to add the full remote path by adding `--path /home/jkaninda/backups` flag or using `REMOTE_PATH` environment variable. +To store your backups on an FTP remote server, you can configure the backup process to use the `--storage ftp` option. -{: .note } -These environment variables are required for SSH backup `FTP_HOST`, `FTP_USER`, `REMOTE_PATH`, `FTP_PORT` or `FTP_PASSWORD`. +This section explains how to set up and configure FTP-based backups. -```yml +--- + +## Configuration Steps + +1. **Specify the Storage Type** + Add the `--storage ftp` flag to your backup command. + +2. **Set the Remote Path** + Define the full remote path where backups will be stored using the `--path` flag or the `REMOTE_PATH` environment variable. + Example: `--path /home/jkaninda/backups`. + +3. **Required Environment Variables** + The following environment variables are mandatory for FTP-based backups: + + - `FTP_HOST`: The hostname or IP address of the FTP server. + - `FTP_PORT`: The FTP port (default is `21`). + - `FTP_USER`: The username for FTP authentication. + - `FTP_PASSWORD`: The password for FTP authentication. + - `REMOTE_PATH`: The directory on the FTP server where backups will be stored. + +--- + +## Example Configuration + +Below is an example `docker-compose.yml` configuration for backing up to an FTP remote server: + +```yaml services: mysql-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. 
- # Check https://github.com/jkaninda/mysql-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/mysql-bkup/releases + # for available releases. image: jkaninda/mysql-bkup container_name: mysql-bkup command: backup --storage ftp -d database environment: - DB_PORT=3306 - - DB_HOST=postgres + - DB_HOST=mysql - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password - ## FTP config + ## FTP Configuration - FTP_HOST="hostname" - FTP_PORT=21 - FTP_USER=user - FTP_PASSWORD=password - REMOTE_PATH=/home/jkaninda/backups - # pg-bkup container must be connected to the same network with your database + # Ensure the mysql-bkup container is connected to the same network as your database networks: - web + networks: web: -``` \ No newline at end of file +``` + +--- + +## Key Notes + +- **Security**: FTP transmits data, including passwords, in plaintext. For better security, consider using SFTP (SSH File Transfer Protocol) or FTPS (FTP Secure) if supported by your server. +- **Remote Path**: Ensure the `REMOTE_PATH` directory exists on the FTP server and is writable by the specified `FTP_USER`. \ No newline at end of file diff --git a/docs/how-tos/backup-to-s3.md b/docs/how-tos/backup-to-s3.md index 5bb266b..93b4c9c 100644 --- a/docs/how-tos/backup-to-s3.md +++ b/docs/how-tos/backup-to-s3.md @@ -4,85 +4,123 @@ layout: default parent: How Tos nav_order: 2 --- -# Backup to AWS S3 +# Backup to AWS S3 -{: .note } -As described on local backup section, to change the storage of you backup and use S3 as storage. You need to add `--storage s3` (-s s3). -You can also specify a specify folder where you want to save you data by adding `--path /my-custom-path` flag. +To store your backups on AWS S3, you can configure the backup process to use the `--storage s3` option. This section explains how to set up and configure S3-based backups. 
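If you target an S3-compatible service such as Minio rather than AWS itself, only the endpoint and the two compatibility toggles described in this section change; everything else stays the same. A hedged sketch of the relevant environment entries (the `http://minio:9000` endpoint and the region are placeholder values):

```yaml
    environment:
      - AWS_S3_ENDPOINT=http://minio:9000   # placeholder: your Minio endpoint
      - AWS_S3_BUCKET_NAME=backup
      - AWS_REGION=us-east-1                # placeholder: often ignored by Minio
      - AWS_ACCESS_KEY=xxxx
      - AWS_SECRET_KEY=xxxxx
      - AWS_DISABLE_SSL="true"              # Minio instance without TLS
      - AWS_FORCE_PATH_STYLE="true"         # path-style access for S3 alternatives
```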
+--- -## Backup to S3 -```yml +## Configuration Steps + +1. **Specify the Storage Type** + Add the `--storage s3` flag to your backup command. + +2. **Set the S3 Path** + Optionally, specify a custom folder within your S3 bucket where backups will be stored using the `--path` flag. + Example: `--path /my-custom-path`. + +3. **Required Environment Variables** + The following environment variables are mandatory for S3-based backups: + + - `AWS_S3_ENDPOINT`: The S3 endpoint URL (e.g., `https://s3.amazonaws.com`). + - `AWS_S3_BUCKET_NAME`: The name of the S3 bucket where backups will be stored. + - `AWS_REGION`: The AWS region where the bucket is located (e.g., `us-west-2`). + - `AWS_ACCESS_KEY`: Your AWS access key. + - `AWS_SECRET_KEY`: Your AWS secret key. + - `AWS_DISABLE_SSL`: Set to `"true"` if using an S3 alternative like Minio without SSL (default is `"false"`). + - `AWS_FORCE_PATH_STYLE`: Set to `"true"` if using an S3 alternative like Minio (default is `"false"`). + +--- + +## Example Configuration + +Below is an example `docker-compose.yml` configuration for backing up to AWS S3: + +```yaml services: mysql-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/mysql-bkup/releases - # for a list of available releases. - image: jkaninda/mysql-bkup - container_name: mysql-bkup + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/mysql-bkup/releases + # for available releases.
+ image: jkaninda/mysql-bkup + container_name: mysql-bkup command: backup --storage s3 -d database --path /my-custom-path environment: - - DB_PORT=3306 - - DB_HOST=mysql + - DB_PORT=3306 + - DB_HOST=mysql - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password - ## AWS configurations + ## AWS Configuration - AWS_S3_ENDPOINT=https://s3.amazonaws.com - AWS_S3_BUCKET_NAME=backup - - AWS_REGION="us-west-2" + - AWS_REGION=us-west-2 - AWS_ACCESS_KEY=xxxx - AWS_SECRET_KEY=xxxxx - ## In case you are using S3 alternative such as Minio and your Minio instance is not secured, you change it to true + ## Optional: Disable SSL for S3 alternatives like Minio - AWS_DISABLE_SSL="false" - - AWS_FORCE_PATH_STYLE=true # true for S3 alternative such as Minio - - # mysql-bkup container must be connected to the same network with your database + ## Optional: Enable path-style access for S3 alternatives like Minio + - AWS_FORCE_PATH_STYLE=false + + # Ensure the mysql-bkup container is connected to the same network as your database networks: - web + networks: web: ``` -### Recurring backups to S3 +--- -As explained above, you need just to add AWS environment variables and specify the storage type `--storage s3`. -In case you need to use recurring backups, you can use `--cron-expression "0 1 * * *"` flag or `BACKUP_CRON_EXPRESSION=0 1 * * *` as described below. +## Recurring Backups to S3 -```yml +To schedule recurring backups to S3, use the `--cron-expression` flag or the `BACKUP_CRON_EXPRESSION` environment variable. This allows you to define a cron schedule for automated backups. + +### Example: Recurring Backup Configuration + +```yaml services: mysql-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/mysql-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`.
Check https://github.com/jkaninda/mysql-bkup/releases + # for available releases. image: jkaninda/mysql-bkup container_name: mysql-bkup - command: backup --storage s3 -d my-database --cron-expression "0 1 * * *" + command: backup --storage s3 -d database --cron-expression "0 1 * * *" environment: - DB_PORT=3306 - DB_HOST=mysql - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password - ## AWS configurations + ## AWS Configuration - AWS_S3_ENDPOINT=https://s3.amazonaws.com - AWS_S3_BUCKET_NAME=backup - - AWS_REGION="us-west-2" + - AWS_REGION=us-west-2 - AWS_ACCESS_KEY=xxxx - AWS_SECRET_KEY=xxxxx - # - BACKUP_CRON_EXPRESSION=0 1 * * * # Optional - #Delete old backup created more than specified days ago + ## Optional: Define a cron schedule for recurring backups + #- BACKUP_CRON_EXPRESSION=0 1 * * * + ## Optional: Delete old backups after a specified number of days #- BACKUP_RETENTION_DAYS=7 - ## In case you are using S3 alternative such as Minio and your Minio instance is not secured, you change it to true + ## Optional: Disable SSL for S3 alternatives like Minio - AWS_DISABLE_SSL="false" - - AWS_FORCE_PATH_STYLE=true # true for S3 alternative such as Minio - # mysql-bkup container must be connected to the same network with your database + ## Optional: Enable path-style access for S3 alternatives like Minio + - AWS_FORCE_PATH_STYLE=false + + # Ensure the mysql-bkup container is connected to the same network as your database networks: - web + networks: web: ``` +--- + +## Key Notes + +- **Cron Expression**: Use the `--cron-expression` flag or `BACKUP_CRON_EXPRESSION` environment variable to define the backup schedule. For example, `0 1 * * *` runs the backup daily at 1:00 AM. +- **Backup Retention**: Optionally, use the `BACKUP_RETENTION_DAYS` environment variable to automatically delete backups older than a specified number of days.
+- **S3 Alternatives**: If using an S3 alternative like Minio, set `AWS_DISABLE_SSL="true"` and `AWS_FORCE_PATH_STYLE="true"` as needed. + diff --git a/docs/how-tos/backup.md b/docs/how-tos/backup.md index 75e2bdf..e513ac0 100644 --- a/docs/how-tos/backup.md +++ b/docs/how-tos/backup.md @@ -5,28 +5,37 @@ parent: How Tos nav_order: 1 --- -# Backup database +# Backup Database -To backup the database, you need to add `backup` command. +To back up your database, use the `backup` command. + +This section explains how to configure and run backups, including recurring backups, using Docker or Kubernetes. + +--- + +## Default Configuration + +- **Storage**: By default, backups are stored locally in the `/backup` directory. +- **Compression**: Backups are compressed using `gzip` by default. Use the `--disable-compression` flag to disable compression. +- **Security**: It is recommended to create a dedicated user with read-only access for backup tasks. {: .note } -The default storage is local storage mounted to __/backup__. The backup is compressed by default using gzip. The flag __`disable-compression`__ can be used when you need to disable backup compression. +The backup process supports recurring backups on Docker or Docker Swarm. On Kubernetes, it can be deployed as a CronJob. -{: .warning } -Creating a user for backup tasks who has read-only access is recommended! +--- -The backup process can be run in scheduled mode for the recurring backups. -It handles __recurring__ backups of mysql database on Docker and can be deployed as __CronJob on Kubernetes__ using local, AWS S3 or SSH compatible storage. +## Example: Basic Backup Configuration -```yml +Below is an example `docker-compose.yml` configuration for backing up a database: + +```yaml services: - mysql-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/mysql-bkup/releases - # for a list of available releases. 
- image: jkaninda/mysql-bkup - container_name: mysql-bkup + mysql-bkup: + # In production, lock your image tag to a specific release version + # instead of using `latest`. Check https://github.com/jkaninda/mysql-bkup/releases + # for available releases. + image: jkaninda/mysql-bkup + container_name: mysql-bkup command: backup -d database volumes: - ./backup:/backup @@ -36,36 +45,47 @@ services: - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password - # mysql-bkup container must be connected to the same network with your database + + # Ensure the mysql-bkup container is connected to the same network as your database networks: - web + networks: web: ``` -### Backup using Docker CLI +--- -```shell - docker run --rm --network your_network_name \ - -v $PWD/backup:/backup/ \ - -e "DB_HOST=dbhost" \ - -e "DB_USERNAME=username" \ - -e "DB_PASSWORD=password" \ - jkaninda/mysql-bkup backup -d database_name +## Backup Using Docker CLI + +You can also run backups directly using the Docker CLI: + +```bash +docker run --rm --network your_network_name \ + -v $PWD/backup:/backup/ \ + -e "DB_HOST=dbhost" \ + -e "DB_USERNAME=username" \ + -e "DB_PASSWORD=password" \ + jkaninda/mysql-bkup backup -d database_name ``` -In case you need to use recurring backups, you can use `--cron-expression "0 1 * * *"` flag or `BACKUP_CRON_EXPRESSION=0 1 * * *` as described below. +--- -```yml +## Recurring Backups + +To schedule recurring backups, use the `--cron-expression` flag or the `BACKUP_CRON_EXPRESSION` environment variable. This allows you to define a cron schedule for automated backups. + +### Example: Recurring Backup Configuration + +```yaml services: mysql-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/mysql-bkup/releases - # for a list of available releases. + # In production, lock your image tag to a specific release version + # instead of using `latest`.
Check https://github.com/jkaninda/mysql-bkup/releases + # for available releases. image: jkaninda/mysql-bkup container_name: mysql-bkup - command: backup -d database --cron-expression "0 1 * * *" + command: backup -d database --cron-expression @midnight volumes: - ./backup:/backup environment: @@ -74,13 +94,24 @@ services: - DB_NAME=database - DB_USERNAME=username - DB_PASSWORD=password - - BACKUP_CRON_EXPRESSION=0 1 * * * - #Delete old backup created more than specified days ago + ## Optional: Define a cron schedule for recurring backups + - BACKUP_CRON_EXPRESSION=@midnight + ## Optional: Delete old backups after a specified number of days #- BACKUP_RETENTION_DAYS=7 - # mysql-bkup container must be connected to the same network with your database + + # Ensure the mysql-bkup container is connected to the same network as your database networks: - web + networks: web: ``` +--- + +## Key Notes + +- **Cron Expression**: Use the `--cron-expression` flag or `BACKUP_CRON_EXPRESSION` environment variable to define the backup schedule. For example: + - `@midnight`: Runs the backup daily at midnight. + - `0 1 * * *`: Runs the backup daily at 1:00 AM. +- **Backup Retention**: Optionally, use the `BACKUP_RETENTION_DAYS` environment variable to automatically delete backups older than a specified number of days. diff --git a/docs/index.md b/docs/index.md index 37576d0..f3b6051 100644 --- a/docs/index.md +++ b/docs/index.md @@ -10,175 +10,76 @@ nav_order: 1 **MYSQL-BKUP** is a Docker container image designed to **backup, restore, and migrate MySQL databases**. It supports a variety of storage options and ensures data security through GPG encryption. -## Features +--- -- **Storage Options:** - - Local storage - - AWS S3 or any S3-compatible object storage - - FTP - - SSH-compatible storage - - Azure Blob storage +## Key Features -- **Data Security:** - - Backups can be encrypted using **GPG** to ensure confidentiality. 
+### Storage Options +- **Local storage** +- **AWS S3** or any S3-compatible object storage +- **FTP** +- **SFTP** +- **SSH-compatible storage** +- **Azure Blob storage** -- **Deployment Flexibility:** - - Available as the [jkaninda/mysql-bkup](https://hub.docker.com/r/jkaninda/mysql-bkup) Docker image. - - Deployable on **Docker**, **Docker Swarm**, and **Kubernetes**. - - Supports recurring backups of MySQL databases when deployed: - - On Docker for automated backup schedules. - - As a **Job** or **CronJob** on Kubernetes. +### Data Security +- Backups can be encrypted using **GPG** to ensure data confidentiality. -- **Notifications:** - - Get real-time updates on backup success or failure via: - - **Telegram** - - **Email** +### Deployment Flexibility +- Available as the [jkaninda/mysql-bkup](https://hub.docker.com/r/jkaninda/mysql-bkup) Docker image. +- Deployable on **Docker**, **Docker Swarm**, and **Kubernetes**. +- Supports recurring backups of MySQL databases: + - On Docker for automated backup schedules. + - As a **Job** or **CronJob** on Kubernetes. + +### Notifications +- Receive real-time updates on backup success or failure via: + - **Telegram** + - **Email** + +--- ## Use Cases - **Automated Recurring Backups:** Schedule regular backups for MySQL databases. -- **Cross-Environment Migration:** Easily migrate your MySQL databases across different environments using supported storage options. +- **Cross-Environment Migration:** Easily migrate MySQL databases across different environments using supported storage options. - **Secure Backup Management:** Protect your data with GPG encryption. +--- +## Get Involved + +We welcome contributions! Feel free to give us a ⭐, submit PRs, or open issues on our [GitHub repository](https://github.com/jkaninda/mysql-bkup). + +{: .fs-6 .fw-300 } + +--- {: .note } -Code and documentation for `v1` version on [this branch][v1-branch]. +Code and documentation for the `v1` version are available on [this branch][v1-branch].
[v1-branch]: https://github.com/jkaninda/mysql-bkup --- -## Quickstart +## Available Image Registries -### Simple backup using Docker CLI +The Docker image is published to both **Docker Hub** and the **GitHub Container Registry**. You can use either of the following: -To run a one time backup, bind your local volume to `/backup` in the container and run the `backup` command: - -```shell - docker run --rm --network your_network_name \ - -v $PWD/backup:/backup/ \ - -e "DB_HOST=dbhost" \ - -e "DB_USERNAME=username" \ - -e "DB_PASSWORD=password" \ - jkaninda/mysql-bkup backup -d database_name -``` - -Alternatively, pass a `--env-file` in order to use a full config as described below. - -```yaml - docker run --rm --network your_network_name \ - --env-file your-env-file \ - -v $PWD/backup:/backup/ \ - jkaninda/mysql-bkup backup -d database_name -``` - -### Simple backup in docker compose file - -```yaml -services: - mysql-bkup: - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/mysql-bkup/releases - # for a list of available releases. 
- image: jkaninda/mysql-bkup - container_name: mysql-bkup - command: backup - volumes: - - ./backup:/backup - environment: - - DB_PORT=3306 - - DB_HOST=mysql - - DB_NAME=foo - - DB_USERNAME=bar - - DB_PASSWORD=password - - TZ=Europe/Paris - # mysql-bkup container must be connected to the same network with your database - networks: - - web -networks: - web: -``` -### Docker recurring backup - -```shell - docker run --rm --network network_name \ - -v $PWD/backup:/backup/ \ - -e "DB_HOST=hostname" \ - -e "DB_USERNAME=user" \ - -e "DB_PASSWORD=password" \ - jkaninda/mysql-bkup backup -d dbName --cron-expression "@every 15m" #@midnight -``` -See: https://jkaninda.github.io/mysql-bkup/reference/#predefined-schedules - -## Kubernetes - -```yaml -apiVersion: batch/v1 -kind: Job -metadata: - name: backup-job -spec: - ttlSecondsAfterFinished: 100 - template: - spec: - containers: - - name: mysql-bkup - # In production, it is advised to lock your image tag to a proper - # release version instead of using `latest`. - # Check https://github.com/jkaninda/mysql-bkup/releases - # for a list of available releases. - image: jkaninda/mysql-bkup - command: - - /bin/sh - - -c - - backup -d dbname - resources: - limits: - memory: "128Mi" - cpu: "500m" - env: - - name: DB_HOST - value: "mysql" - - name: DB_USERNAME - value: "user" - - name: DB_PASSWORD - value: "password" - volumeMounts: - - mountPath: /backup - name: backup - volumes: - - name: backup - hostPath: - path: /home/toto/backup # directory location on host - type: Directory # this field is optional - restartPolicy: Never -``` - -## Available image registries - -This Docker image is published to both Docker Hub and the GitHub container registry. 
-Depending on your preferences and needs, you can reference both `jkaninda/mysql-bkup` as well as `ghcr.io/jkaninda/mysql-bkup`: - -``` +```bash docker pull jkaninda/mysql-bkup docker pull ghcr.io/jkaninda/mysql-bkup ``` -Documentation references Docker Hub, but all examples will work using ghcr.io just as well. +While the documentation references Docker Hub, all examples work seamlessly with `ghcr.io`. -## Supported Engines - -This image is developed and tested against the Docker CE engine and Kubernetes exclusively. -While it may work against different implementations, there are no guarantees about support for non-Docker engines. +--- ## References -We decided to publish this image as a simpler and more lightweight alternative because of the following requirements: +We created this image as a simpler and more lightweight alternative to existing solutions. Here’s why: -- The original image is based on `alpine` and requires additional tools, making it heavy. -- This image is written in Go. -- `arm64` and `arm/v7` architectures are supported. -- Docker in Swarm mode is supported. -- Kubernetes is supported. +- **Lightweight:** Written in Go, the image is optimized for performance and minimal resource usage. +- **Multi-Architecture Support:** Supports `arm64` and `arm/v7` architectures. +- **Docker Swarm Support:** Fully compatible with Docker in Swarm mode. +- **Kubernetes Support:** Designed to work seamlessly with Kubernetes. diff --git a/docs/quickstart/index.md b/docs/quickstart/index.md new file mode 100644 index 0000000..92393b8 --- /dev/null +++ b/docs/quickstart/index.md @@ -0,0 +1,138 @@ +--- +title: Quickstart +layout: home +nav_order: 2 +--- + +# Quickstart + +This guide provides quick examples for running backups using Docker CLI, Docker Compose, and Kubernetes. 
+ +--- + +## Simple Backup Using Docker CLI + +To run a one-time backup, bind your local volume to `/backup` in the container and execute the `backup` command: + +```bash +docker run --rm --network your_network_name \ + -v $PWD/backup:/backup/ \ + -e "DB_HOST=dbhost" \ + -e "DB_USERNAME=username" \ + -e "DB_PASSWORD=password" \ + jkaninda/mysql-bkup backup -d database_name +``` + +### Using an Environment File + +Alternatively, you can use an `--env-file` to pass a full configuration: + +```bash +docker run --rm --network your_network_name \ + --env-file your-env-file \ + -v $PWD/backup:/backup/ \ + jkaninda/mysql-bkup backup -d database_name +``` + +--- + +## Simple Backup Using Docker Compose + +Below is an example `docker-compose.yml` configuration for running a backup: + +```yaml +services: + mysql-bkup: + # In production, lock the image tag to a specific release version. + # Check https://github.com/jkaninda/mysql-bkup/releases for available releases. + image: jkaninda/mysql-bkup + container_name: mysql-bkup + command: backup + volumes: + - ./backup:/backup + environment: + - DB_PORT=3306 + - DB_HOST=mysql + - DB_NAME=foo + - DB_USERNAME=bar + - DB_PASSWORD=password + - TZ=Europe/Paris + # Ensure the mysql-bkup container is connected to the same network as your database. + networks: + - web + +networks: + web: +``` + +--- + +## Recurring Backup with Docker + +To schedule recurring backups, use the `--cron-expression` flag: + +```bash +docker run --rm --network network_name \ + -v $PWD/backup:/backup/ \ + -e "DB_HOST=hostname" \ + -e "DB_USERNAME=user" \ + -e "DB_PASSWORD=password" \ + jkaninda/mysql-bkup backup -d dbName --cron-expression "@every 15m" +``` + +For predefined schedules, refer to the [documentation](https://jkaninda.github.io/mysql-bkup/reference/#predefined-schedules). 
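A standard cron expression has five fields — minute, hour, day of month, month, and day of week — which is what `0 1 * * *` (daily at 01:00) encodes. A quick shell sketch that splits an expression into its fields:

```shell
#!/bin/sh
expr="0 1 * * *"   # run daily at 01:00
set -f             # disable globbing so the literal "*" fields survive word splitting
set -- $expr       # word-split the expression into the five positional fields
echo "minute=$1 hour=$2 day=$3 month=$4 weekday=$5"
# prints: minute=0 hour=1 day=* month=* weekday=*
```

Predefined aliases such as `@midnight` and `@every 15m` replace the five-field form entirely.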
+ +--- + +## Backup Using Kubernetes + +Below is an example Kubernetes `Job` configuration for running a backup: + +```yaml +apiVersion: batch/v1 +kind: Job +metadata: + name: backup-job +spec: + ttlSecondsAfterFinished: 100 + template: + spec: + containers: + - name: mysql-bkup + # In production, lock the image tag to a specific release version. + # Check https://github.com/jkaninda/mysql-bkup/releases for available releases. + image: jkaninda/mysql-bkup + command: + - /bin/sh + - -c + - backup -d dbname + resources: + limits: + memory: "128Mi" + cpu: "500m" + env: + - name: DB_HOST + value: "mysql" + - name: DB_USERNAME + value: "user" + - name: DB_PASSWORD + value: "password" + volumeMounts: + - mountPath: /backup + name: backup + volumes: + - name: backup + hostPath: + path: /home/toto/backup # Directory location on the host + type: Directory # Optional field + restartPolicy: Never +``` + +--- + +## Key Notes + +- **Volume Binding**: Ensure the `/backup` directory is mounted to persist backup files. +- **Environment Variables**: Use environment variables or an `--env-file` to pass database credentials and other configurations. +- **Cron Expressions**: Use standard cron expressions or predefined schedules for recurring backups. +- **Kubernetes Jobs**: Use Kubernetes `Job` or `CronJob` for running backups in a Kubernetes cluster. \ No newline at end of file diff --git a/docs/reference/index.md b/docs/reference/index.md index 559f7a0..aa3c81d 100644 --- a/docs/reference/index.md +++ b/docs/reference/index.md @@ -1,139 +1,127 @@ --- title: Configuration Reference layout: default -nav_order: 2 +nav_order: 3 --- -# Configuration reference +# Configuration Reference -Backup, restore and migrate targets, schedule and retention are configured using environment variables or flags. +Backup, restore, and migration targets, schedules, and retention policies are configured using **environment variables** or **CLI flags**.
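As a quick illustration of the two styles, the schedule used throughout the how-tos can be given either as a CLI flag or as an environment variable; a compose-file sketch (only one of the two forms is needed):

```yaml
    # flag form, on the command line:
    command: backup -d database --cron-expression "0 1 * * *"
    # equivalent environment-variable form:
    environment:
      - BACKUP_CRON_EXPRESSION=0 1 * * *
```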
-
-
-
-
-### CLI utility Usage
-
-| Options               | Shorts | Usage                                                                                 |
-|-----------------------|--------|---------------------------------------------------------------------------------------|
-| mysql-bkup            | bkup   | CLI utility                                                                           |
-| backup                |        | Backup database operation                                                             |
-| restore               |        | Restore database operation                                                            |
-| migrate               |        | Migrate database from one instance to another one                                     |
-| --storage             | -s     | Storage. local or s3 (default: local)                                                 |
-| --file                | -f     | File name for restoration                                                             |
-| --path                |        | AWS S3 path without file name. eg: /custom_path or ssh remote path `/home/foo/backup` |
-| --dbname              | -d     | Database name                                                                         |
-| --port                | -p     | Database port (default: 3306)                                                         |
-| --disable-compression |        | Disable database backup compression                                                   |
-| --cron-expression     |        | Backup cron expression, eg: (* * * * *) or @daily                                     |
-| --help                | -h     | Print this help message and exit                                                      |
-| --version             | -V     | Print version information and exit                                                    |
-
-## Environment variables
-
-| Name                         | Requirement                                                   | Description                                                     |
-|------------------------------|---------------------------------------------------------------|-----------------------------------------------------------------|
-| DB_PORT                      | Optional, default 3306                                        | Database port number                                            |
-| DB_HOST                      | Required                                                      | Database host                                                   |
-| DB_NAME                      | Optional if it was provided from the -d flag                  | Database name                                                   |
-| DB_USERNAME                  | Required                                                      | Database user name                                              |
-| DB_PASSWORD                  | Required                                                      | Database password                                               |
-| AWS_ACCESS_KEY               | Optional, required for S3 storage                             | AWS S3 Access Key                                               |
-| AWS_SECRET_KEY               | Optional, required for S3 storage                             | AWS S3 Secret Key                                               |
-| AWS_BUCKET_NAME              | Optional, required for S3 storage                             | AWS S3 Bucket Name                                              |
-| AWS_BUCKET_NAME              | Optional, required for S3 storage                             | AWS S3 Bucket Name                                              |
-| AWS_REGION                   | Optional, required for S3 storage                             | AWS Region                                                      |
-| AWS_DISABLE_SSL              | Optional, required for S3 storage                             | Disable SSL                                                     |
-| AWS_FORCE_PATH_STYLE         | Optional, required for S3 storage                             | Force path style                                                |
-| FILE_NAME                    | Optional if it was provided from the --file flag              | Database file to restore (extensions: .sql, .sql.gz)            |
-| GPG_PASSPHRASE               | Optional, required to encrypt and restore backup              | GPG passphrase                                                  |
-| GPG_PUBLIC_KEY               | Optional, required to encrypt backup                          | GPG public key, used to encrypt backup (/config/public_key.asc) |
-| BACKUP_CRON_EXPRESSION       | Optional if it was provided from the `--cron-expression` flag | Backup cron expression for docker in scheduled mode             |
-| BACKUP_RETENTION_DAYS        | Optional                                                      | Delete old backup created more than specified days ago          |
-| SSH_HOST                     | Optional, required for SSH storage                            | ssh remote hostname or ip                                       |
-| SSH_USER                     | Optional, required for SSH storage                            | ssh remote user                                                 |
-| SSH_PASSWORD                 | Optional, required for SSH storage                            | ssh remote user's password                                      |
-| SSH_IDENTIFY_FILE            | Optional, required for SSH storage                            | ssh remote user's private key                                   |
-| SSH_PORT                     | Optional, required for SSH storage                            | ssh remote server port                                          |
-| REMOTE_PATH                  | Optional, required for SSH or FTP storage                     | remote path (/home/toto/backup)                                 |
-| FTP_HOST                     | Optional, required for FTP storage                            | FTP host name                                                   |
-| FTP_PORT                     | Optional, required for FTP storage                            | FTP server port number                                          |
-| FTP_USER                     | Optional, required for FTP storage                            | FTP user                                                        |
-| FTP_PASSWORD                 | Optional, required for FTP storage                            | FTP user password                                               |
-| TARGET_DB_HOST               | Optional, required for database migration                     | Target database host                                            |
-| TARGET_DB_PORT               | Optional, required for database migration                     | Target database port                                            |
-| TARGET_DB_NAME               | Optional, required for database migration                     | Target database name                                            |
-| TARGET_DB_USERNAME           | Optional, required for database migration                     | Target database username                                        |
-| TARGET_DB_PASSWORD           | Optional, required for database migration                     | Target database password                                        |
-| TG_TOKEN                     | Optional, required for Telegram notification                  | Telegram token (`BOT-ID:BOT-TOKEN`)                             |
-| TG_CHAT_ID                   | Optional, required for Telegram notification                  | Telegram Chat ID                                                |
-| TZ                           | Optional                                                      | Time Zone                                                       |
-| AZURE_STORAGE_CONTAINER_NAME | Optional, required for Azure Blob Storage storage             | Azure storage container name                                    |
-| AZURE_STORAGE_ACCOUNT_NAME   | Optional, required for Azure Blob Storage storage             | Azure storage account name                                      |
-| AZURE_STORAGE_ACCOUNT_KEY    | Optional, required for Azure Blob Storage storage             | Azure storage account key                                       |
---
-## Run in Scheduled mode
-This image can be run as CronJob in Kubernetes for a regular backup which makes deployment on Kubernetes easy as Kubernetes has CronJob resources.
-For Docker, you need to run it in scheduled mode by adding `--cron-expression "* * * * *"` flag or by defining `BACKUP_CRON_EXPRESSION=0 1 * * *` environment variable.
+## CLI Utility Usage
-## Syntax of crontab (field description)
+| Option                  | Short Flag | Description                                                                   |
+|-------------------------|------------|-------------------------------------------------------------------------------|
+| `mysql-bkup`            | `bkup`     | CLI utility for managing MySQL backups.                                       |
+| `backup`                |            | Perform a backup operation.                                                   |
+| `restore`               |            | Perform a restore operation.                                                  |
+| `migrate`               |            | Migrate a database from one instance to another.                              |
+| `--storage`             | `-s`       | Storage type (`local`, `s3`, `ssh`, etc.). Default: `local`.                  |
+| `--file`                | `-f`       | File name for restoration.                                                    |
+| `--path`                |            | Path for storage (e.g., `/custom_path` for S3 or `/home/foo/backup` for SSH). |
+| `--config`              | `-c`       | Configuration file for multi-database backup (e.g., `/backup/config.yaml`).   |
+| `--dbname`              | `-d`       | Database name.                                                                |
+| `--port`                | `-p`       | Database port. Default: `3306`.                                               |
+| `--disable-compression` |            | Disable compression for database backups.                                     |
+| `--cron-expression`     | `-e`       | Cron expression for scheduled backups (e.g., `0 0 * * *` or `@daily`).        |
+| `--help`                | `-h`       | Display help message and exit.                                                |
+| `--version`             | `-V`       | Display version information and exit.                                         |
-The syntax is:
+---
-- 1: Minute (0-59)
-- 2: Hours (0-23)
-- 3: Day (0-31)
-- 4: Month (0-12 [12 == December])
-- 5: Day of the week(0-7 [7 or 0 == sunday])
+## Environment Variables
-
-Easy to remember format:
+| Name                           | Requirement                          | Description                                                                 |
+|--------------------------------|--------------------------------------|-----------------------------------------------------------------------------|
+| `DB_PORT`                      | Optional (default: `3306`)           | Database port number.                                                       |
+| `DB_HOST`                      | Required                             | Database host.                                                              |
+| `DB_NAME`                      | Optional (if provided via `-d` flag) | Database name.                                                              |
+| `DB_USERNAME`                  | Required                             | Database username.                                                          |
+| `DB_PASSWORD`                  | Required                             | Database password.                                                          |
+| `AWS_ACCESS_KEY`               | Required for S3 storage              | AWS S3 Access Key.                                                          |
+| `AWS_SECRET_KEY`               | Required for S3 storage              | AWS S3 Secret Key.                                                          |
+| `AWS_BUCKET_NAME`              | Required for S3 storage              | AWS S3 Bucket Name.                                                         |
+| `AWS_REGION`                   | Required for S3 storage              | AWS Region.                                                                 |
+| `AWS_DISABLE_SSL`              | Optional                             | Disable SSL for S3 storage.                                                 |
+| `AWS_FORCE_PATH_STYLE`         | Optional                             | Force path-style access for S3 storage.                                     |
+| `FILE_NAME`                    | Optional (if provided via `--file`)  | File name for restoration (e.g., `.sql`, `.sql.gz`).                        |
+| `GPG_PASSPHRASE`               | Optional                             | GPG passphrase for encrypting/decrypting backups.                           |
+| `GPG_PUBLIC_KEY`               | Optional                             | GPG public key for encrypting backups (e.g., `/config/public_key.asc`).     |
+| `BACKUP_CRON_EXPRESSION`       | Optional (flag `-e`)                 | Cron expression for scheduled backups.                                      |
+| `BACKUP_RETENTION_DAYS`        | Optional                             | Delete backups older than the specified number of days.                     |
+| `BACKUP_CONFIG_FILE`           | Optional (flag `-c`)                 | Configuration file for multi-database backup (e.g., `/backup/config.yaml`). |
+| `SSH_HOST`                     | Required for SSH storage             | SSH remote hostname or IP.                                                  |
+| `SSH_USER`                     | Required for SSH storage             | SSH remote username.                                                        |
+| `SSH_PASSWORD`                 | Optional                             | SSH remote user's password.                                                 |
+| `SSH_IDENTIFY_FILE`            | Optional                             | SSH remote user's private key.                                              |
+| `SSH_PORT`                     | Optional (default: `22`)             | SSH remote server port.                                                     |
+| `REMOTE_PATH`                  | Required for SSH/FTP storage         | Remote path (e.g., `/home/toto/backup`).                                    |
+| `FTP_HOST`                     | Required for FTP storage             | FTP hostname.                                                               |
+| `FTP_PORT`                     | Optional (default: `21`)             | FTP server port.                                                            |
+| `FTP_USER`                     | Required for FTP storage             | FTP username.                                                               |
+| `FTP_PASSWORD`                 | Required for FTP storage             | FTP user password.                                                          |
+| `TARGET_DB_HOST`               | Required for migration               | Target database host.                                                       |
+| `TARGET_DB_PORT`               | Optional (default: `3306`)           | Target database port.                                                       |
+| `TARGET_DB_NAME`               | Required for migration               | Target database name.                                                       |
+| `TARGET_DB_USERNAME`           | Required for migration               | Target database username.                                                   |
+| `TARGET_DB_PASSWORD`           | Required for migration               | Target database password.                                                   |
+| `TARGET_DB_URL`                | Optional                             | Target database URL in JDBC URI format.                                     |
+| `TG_TOKEN`                     | Required for Telegram notifications  | Telegram token (`BOT-ID:BOT-TOKEN`).                                        |
+| `TG_CHAT_ID`                   | Required for Telegram notifications  | Telegram Chat ID.                                                           |
+| `TZ`                           | Optional                             | Time zone for scheduling.                                                   |
+| `AZURE_STORAGE_CONTAINER_NAME` | Required for Azure Blob Storage      | Azure storage container name.                                               |
+| `AZURE_STORAGE_ACCOUNT_NAME`   | Required for Azure Blob Storage      | Azure storage account name.                                                 |
+| `AZURE_STORAGE_ACCOUNT_KEY`    | Required for Azure Blob Storage      | Azure storage account key.                                                  |
+
+---
+
+## Scheduled Backups
+
+### Running in Scheduled Mode
+
+- **Docker**: Use the `--cron-expression` flag or the `BACKUP_CRON_EXPRESSION` environment variable to schedule backups.
+- **Kubernetes**: Use a `CronJob` resource for scheduled backups.
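+
+For example, scheduled mode with Docker Compose can be sketched as follows. This is a minimal sketch: the credentials, the `mysql` host, and the `web` network are placeholders carried over from the earlier examples.
+
+```yaml
+services:
+  mysql-bkup:
+    # In production, lock your image tag to a specific release version
+    # instead of using `latest`.
+    image: jkaninda/mysql-bkup
+    container_name: mysql-bkup
+    # Run a local backup every day at 1:00 AM
+    command: backup --storage local --cron-expression "0 1 * * *"
+    environment:
+      - DB_HOST=mysql
+      - DB_NAME=database
+      - DB_USERNAME=username
+      - DB_PASSWORD=password
+      # Equivalent alternative to the --cron-expression flag:
+      # - BACKUP_CRON_EXPRESSION=0 1 * * *
+    # Ensure the mysql-bkup container is connected to the same network as your database
+    networks:
+      - web
+networks:
+  web:
+```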
+
+### Cron Syntax
+
+The cron syntax consists of five fields:
 ```conf
-* * * * * command to be executed
+* * * * * command
 ```
+| Field        | Description                  | Values |
+|--------------|------------------------------|--------|
+| Minute       | Minute of the hour           | `0-59` |
+| Hour         | Hour of the day              | `0-23` |
+| Day of Month | Day of the month             | `1-31` |
+| Month        | Month of the year            | `1-12` |
+| Day of Week  | Day of the week (0 = Sunday) | `0-7`  |
+
+#### Examples
+
+- **Every 30 minutes**: `*/30 * * * *`
+- **Every hour at minute 0**: `0 * * * *`
+- **Every day at 1:00 AM**: `0 1 * * *`
+
+### Predefined Schedules
+
+| Entry                      | Description                                | Equivalent To |
+|----------------------------|--------------------------------------------|---------------|
+| `@yearly` (or `@annually`) | Run once a year, midnight, Jan. 1st        | `0 0 1 1 *`   |
+| `@monthly`                 | Run once a month, midnight, first of month | `0 0 1 * *`   |
+| `@weekly`                  | Run once a week, midnight between Sat/Sun  | `0 0 * * 0`   |
+| `@daily` (or `@midnight`)  | Run once a day, midnight                   | `0 0 * * *`   |
+| `@hourly`                  | Run once an hour, beginning of hour        | `0 * * * *`   |
+
+### Intervals
+
+You can also schedule backups at fixed intervals using the format:
+
 ```conf
-- - - - -
-| | | | |
-| | | | ----- Day of week (0 - 7) (Sunday=0 or 7)
-| | | ------- Month (1 - 12)
-| | --------- Day of month (1 - 31)
-| ----------- Hour (0 - 23)
-------------- Minute (0 - 59)
-```
-
-> At every 30th minute
-
-```conf
-*/30 * * * *
-```
-> “At minute 0.” every hour
-```conf
-0 * * * *
-```
-
-> “At 01:00.” every day
-
-```conf
-0 1 * * *
-```
-## Predefined schedules
-You may use one of several pre-defined schedules in place of a cron expression.
-
-| Entry                  | Description                                | Equivalent To |
-|------------------------|--------------------------------------------|---------------|
-| @yearly (or @annually) | Run once a year, midnight, Jan. 1st        | 0 0 1 1 *     |
-| @monthly               | Run once a month, midnight, first of month | 0 0 1 * *     |
-| @weekly                | Run once a week, midnight between Sat/Sun  | 0 0 * * 0     |
-| @daily (or @midnight)  | Run once a day, midnight                   | 0 0 * * *     |
-| @hourly                | Run once an hour, beginning of hour        | 0 * * * *     |
-
-### Intervals
-You may also schedule backup task at fixed intervals, starting at the time it's added or cron is run. This is supported by formatting the cron spec like this:
- @every
-where "duration" is a string accepted by time.
+@every <duration>
+```
-For example, "@every 1h30m10s" would indicate a schedule that activates after 1 hour, 30 minutes, 10 seconds, and then every interval after that.
\ No newline at end of file
+- Example: `@every 1h30m10s` runs the backup every 1 hour, 30 minutes, and 10 seconds.
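+
+Any of the three formats above can be passed to the `--cron-expression` flag or the `BACKUP_CRON_EXPRESSION` environment variable. A short sketch (the values are illustrative):
+
+```conf
+0 1 * * *    # standard cron: every day at 01:00
+@daily       # predefined schedule: every day at midnight
+@every 12h   # fixed interval: every 12 hours, starting when the container starts
+```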