
BARMAN-CLOUD-WAL-ARCHIVE(1) Barman BARMAN-CLOUD-WAL-ARCHIVE(1)

NAME

barman-cloud-wal-archive - Archive PostgreSQL WAL files to cloud storage

Synopsis

barman-cloud-wal-archive

[ { -V | --version } ]
[ --help ]
[ { -v | --verbose } ]
[ { -q | --quiet } ]
[ { -t | --test } ]
[ --cloud-provider { aws-s3 | azure-blob-storage | google-cloud-storage } ]
[ { { -z | --gzip } | { -j | --bzip2 } | --xz | --snappy | --zstd | --lz4 } ]
[ --tags [ TAGS ... ] ]
[ --history-tags [ HISTORY_TAGS ... ] ]
[ --endpoint-url ENDPOINT_URL ]
[ { -P | --aws-profile } AWS_PROFILE ]
[ --read-timeout READ_TIMEOUT ]
[ { -e | --encryption } ENCRYPTION ]
[ --sse-kms-key-id SSE_KMS_KEY_ID ]
[ --azure-credential { azure-cli | managed-identity } ]
[ --encryption-scope ENCRYPTION_SCOPE ]
[ --max-block-size MAX_BLOCK_SIZE ]
[ --max-concurrency MAX_CONCURRENCY ]
[ --max-single-put-size MAX_SINGLE_PUT_SIZE ]
[ --kms-key-name KMS_KEY_NAME ]
DESTINATION_URL SERVER_NAME [ WAL_PATH ]


Description

The barman-cloud-wal-archive command is designed to be used in the archive_command of a Postgres server to directly ship WAL files to cloud storage.
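For example, a minimal configuration in postgresql.conf could look like the following; the bucket and server names are placeholders:

   archive_mode = on
   archive_command = 'barman-cloud-wal-archive s3://my-bucket/wal my-server %p'

Postgres substitutes the path of the WAL file to be archived for %p when it invokes the command.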

NOTE:

If you are using Python 2 or unsupported versions of Python 3, avoid using the compression options --gzip or --bzip2. The script cannot restore gzip-compressed WALs on Python < 3.2 or bzip2-compressed WALs on Python < 3.3.


This script enables the direct transfer of WAL files to cloud storage, bypassing the Barman server. Additionally, it can be utilized as a hook script for WAL archiving (pre_archive_retry_script).
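As a rough sketch of the hook usage, the Barman server configuration for the given server could contain a line like the one below; this assumes the script picks up the WAL file path from Barman's hook environment (the BARMAN_FILE variable), so no WAL_PATH argument is passed:

   pre_archive_retry_script = barman-cloud-wal-archive s3://my-bucket/wal my-server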

NOTE:

For GCP, only authentication via the GOOGLE_APPLICATION_CREDENTIALS environment variable is supported.
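For instance, the variable could point at a service account key file in the environment that starts Postgres; the path is a placeholder:

   export GOOGLE_APPLICATION_CREDENTIALS=/etc/barman/gcs-service-account.json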


Parameters

SERVER_NAME
  Name of the server that will have the WALs archived.

DESTINATION_URL
  URL of the cloud destination, such as a bucket in AWS S3. For example: s3://bucket/path/to/folder.

WAL_PATH
  The value of the '%p' keyword (according to archive_command).

-V / --version
  Show version and exit.

--help
  Show this help message and exit.

-v / --verbose
  Increase output verbosity (e.g., -vv is more than -v).

-q / --quiet
  Decrease output verbosity (e.g., -qq is less than -q).

-t / --test
  Test cloud connectivity and exit (see the example after this list).

--cloud-provider
  The cloud provider to use as a storage backend.

  Allowed options are:

  • aws-s3.
  • azure-blob-storage.
  • google-cloud-storage.

-z / --gzip
  gzip-compress the WAL while uploading to the cloud (should not be used with Python < 3.2).

-j / --bzip2
  bzip2-compress the WAL while uploading to the cloud (should not be used with Python < 3.3).

--xz
  xz-compress the WAL while uploading to the cloud (should not be used with Python < 3.3).

--snappy
  snappy-compress the WAL while uploading to the cloud (requires the optional python-snappy library).

--zstd
  zstd-compress the WAL while uploading to the cloud (requires the optional zstandard library).

--lz4
  lz4-compress the WAL while uploading to the cloud (requires the optional lz4 library).

--tags
  Tags to be added to archived WAL files in cloud storage.

--history-tags
  Tags to be added to archived history files in cloud storage.
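As mentioned in the -t / --test entry above, connectivity can be checked before wiring the command into archive_command. A minimal test invocation, with placeholder bucket and server names, could be:

   barman-cloud-wal-archive --test s3://my-bucket/wal my-server

No WAL_PATH is needed here, since the command only verifies that the destination is reachable and then exits.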

Extra options for the AWS cloud provider

--endpoint-url
  Override default S3 endpoint URL with the given one.

-P / --aws-profile
  Profile name (e.g. INI section in AWS credentials file).

--profile (deprecated)
  Profile name (e.g. INI section in AWS credentials file) - replaced by --aws-profile.

--read-timeout
  The time in seconds until a timeout is raised when waiting to read from a connection (defaults to 60 seconds).

-e / --encryption
  The encryption algorithm used when storing the uploaded data in S3.

  Allowed options are:

  • AES256.
  • aws:kms.

--sse-kms-key-id
  The AWS KMS key ID that should be used for encrypting the uploaded data in S3. Can be specified using the key ID on its own or using the full ARN for the key. Only allowed if -e / --encryption is set to aws:kms.
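Putting the AWS options together, an illustrative invocation could look like this; the profile name and KMS key ID are hypothetical:

   barman-cloud-wal-archive \
     --aws-profile barman \
     --encryption aws:kms \
     --sse-kms-key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
     s3://my-bucket/wal my-server %p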

Extra options for the Azure cloud provider

--azure-credential
  Optionally specify the type of credential to use when authenticating with Azure. If omitted, Azure Blob Storage credentials will be obtained from the environment and the default Azure authentication flow will be used for authenticating with all other Azure services. If no credentials can be found in the environment, the default Azure authentication flow will also be used for Azure Blob Storage.

  Allowed options are:

  • azure-cli.
  • managed-identity.

--encryption-scope
  The name of an encryption scope defined in the Azure Blob Storage service which is to be used to encrypt the data in Azure.

--max-block-size
  The chunk size to be used when uploading an object via the concurrent chunk method (default: 4MB).

--max-concurrency
  The maximum number of chunks to be uploaded concurrently (default: 1).

--max-single-put-size
  Maximum size for which the Azure client will upload an object in a single request (default: 64MB). If this is set lower than the Postgres WAL segment size after any applied compression, then the concurrent chunk upload method for WAL archiving will be used.
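An illustrative Azure invocation could look like the following; the storage account, container, and server names are placeholders, and the URL assumes the standard blob service endpoint:

   barman-cloud-wal-archive \
     --cloud-provider azure-blob-storage \
     --azure-credential managed-identity \
     --max-concurrency 4 \
     https://myaccount.blob.core.windows.net/my-container/wal my-server %p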

Extra options for the GCP cloud provider

--kms-key-name
  The name of the GCP KMS key which should be used for encrypting the uploaded data in GCS.
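An illustrative GCS invocation could look like the following; the bucket, project, and server names are placeholders, and the key name follows the usual Cloud KMS resource path:

   barman-cloud-wal-archive \
     --cloud-provider google-cloud-storage \
     --kms-key-name projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key \
     gs://my-bucket/wal my-server %p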

AUTHOR

EnterpriseDB

COPYRIGHT

© Copyright EnterpriseDB UK Limited 2011-2024

December 9, 2024 3.12