Plugins

The functionality of Bareos can be extended by plugins. Plugins exist for the different daemons (Director, Storage Daemon and File Daemon).

To use plugins, they must be enabled in the configuration (Plugin Directory and optionally Plugin Names).

If a Plugin Directory is specified, Plugin Names defines which plugins get loaded.

If Plugin Names is not defined, all plugins found in the Plugin Directory are loaded.

The program bpluginfo can be used to retrieve information about a specific plugin.
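For example, to show the details of a single plugin (a sketch, assuming the default plugin directory):

bpluginfo /usr/lib64/bareos/plugins/bpipe-fd.so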

Python Plugins

A special case of the Bareos plugins are the Python plugins. They create a bridge between the Bareos Plugin API and the Python programming language, making it possible to implement Bareos plugins by writing Python code.

For each daemon there exists a Daemon Python Plugin which is a plugin implementing the C API for Bareos plugins, see python-fd Plugin, python-sd Plugin and python-dir Plugin.

This Python plugin is configured via the usual plugin configuration mechanism to specify which Python files to load. The Python files then implement the plugin functionality.

An example for such Python Plugins is the VMware Python Plugin.

With Bareos Version >= 23, support for Python 2 (which is end-of-life since January 1, 2020) was removed.

The following plugins exist:

Bareos Python plugins

  Daemon                   Python 3
  Bareos File Daemon       python3-fd
  Bareos Storage Daemon    python3-sd
  Bareos Director          python3-dir

For implementation details see Python Plugin API.

Switching to Python 3

To switch to the Python 3 plugin, the following needs to be changed:

  • Set Plugin Names = “python3” to make sure the Python3 plugin is loaded.

  • Adapt the Plugin setting in the fileset to use Python3: Plugin = “python3:module_name=…”

Recovering old backups

When doing backups, the plugin parameter string is stored into the backup stream. During restore, this string is used to determine the plugin that will handle this data.

To allow backups that were created with the python-fd plugin to be restored with the python3-fd plugin, the code determining which plugin will handle the data also matches the basename of the currently available plugins without the last character.

So backups created with the python plugin (which uses Python 2) can be restored with the python3 plugin (which uses Python 3).

Warning

It is not possible to use the python plugin to restore backups created with the python3 plugin. Once switched, you need to stay on python3.

Director Plugins

python-dir Plugin

The python-dir (or python3-dir) plugin is intended to extend the functionality of the Bareos Director by Python code.

Configuration:

The director plugins are configured in the Dir Plugin Options (Dir->Job) (or JobDefs) resource. To load a Python plugin you need:

instance

default is '0'; you can leave this as long as you only have one Director Python plugin. If you have more than one, start with instance=0 and increment the instance for each plugin.

module_name

The file (or directory) name of your plugin (without the suffix .py)

module_path

Plugin path (optional, only required when using non default paths)

Plugin-specific options can be added as key=value pairs, each pair separated by a colon (:).

Single Director Python Plugin Example:

bareos-dir.conf: Single Python Plugin Loading Example
Director {
  # ...
  # Plugin directory
  Plugin Directory = /usr/lib64/bareos/plugins
  # Load the python plugin
  Plugin Names = "python3"
}

JobDefs {
  Name = "DefaultJob"
  Type = Backup
  # ...
  # Load the class based plugin with testoption=testparam
  Dir Plugin Options = "python3"
                       ":instance=0"
                       ":module_name=bareos-dir-class-plugins"
                       ":testoption=testparam"
  # ...
}

Multiple Python Plugin Loading Example:

bareos-dir.conf: Multiple Python Plugin Loading Example
Director {
  # ...
  # Plugin directory
  Plugin Directory = /usr/lib64/bareos/plugins
  # Load the python plugin
  Plugin Names = "python3"
}

JobDefs {
  Name = "DefaultJob"
  Type = Backup
  # ...
  # Load the class based plugin twice, with different options
  Dir Plugin Options = "python3"
                       ":instance=0"
                       ":module_name=bareos-dir-class-plugins"
                       ":testoption=testparam1"
  Dir Plugin Options = "python3"
                       ":instance=1"
                       ":module_name=bareos-dir-class-plugins"
                       ":testoption=testparam2"
  # ...
}

Write your own Python Plugin

The class-based approach lets you easily reuse code already defined in the Python base class, which ships with the bareos-director-python-plugin package.

Some plugin examples are available on https://github.com/bareos/bareos/tree/master/contrib/dir-plugins, e.g. the plugin bareos-dir-nsca-sender, that submits the results and performance data of a backup job directly to Icinga or Nagios using the NSCA protocol.

Storage Daemon Plugins

autoxflate-sd

This plugin is part of the bareos-storage package.

The autoxflate-sd plugin can inflate (decompress) and deflate (compress) the data being written to or read from a device. It can also do both.

[Figure: autoxflate function blocks]

To do so, the autoxflate plugin inserts an inflate and a deflate function block into the stream going to the device (called OUT) and coming from the device (called IN).

Each stream passes first the inflate function block, then the deflate function block.

The inflate blocks are controlled by the setting of the Auto Inflate (Sd->Device) directive.

The deflate blocks are controlled by the setting of the Auto Deflate (Sd->Device), Auto Deflate Algorithm (Sd->Device) and Auto Deflate Level (Sd->Device) directives.

The inflate blocks, if enabled, will decompress compressed data using the algorithm that was used during compression.

The deflate blocks, if enabled, will compress uncompressed data with the algorithm and level configured in the according directives.

The series connection of the inflate and deflate function blocks makes the plugin very flexible.
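As a sketch, a file device that decompresses incoming data and recompresses it with LZ4 before writing could be configured like this (the directive values out and LZ4 are assumptions based on the directives described above; adapt them to your setup):

bareos-sd.d/device/FileStorage.conf: autoxflate sketch
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /var/lib/bareos/storage
  # decompress the OUT stream (data arriving from the FD), if compressed ...
  Auto Inflate = out
  # ... and recompress it before it is written to the device
  Auto Deflate = out
  Auto Deflate Algorithm = LZ4
}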

Scenarios where this plugin can be used are for example:

  • client computers with weak cpus can do backups without compression and let the sd do the compression when writing to disk

  • compressed backups can be recompressed to a different compression format (e.g. gzip → lzo) using migration jobs

  • client backups can be compressed with compression algorithms that the client itself does not support

Multi-core CPUs will be utilized when using parallel jobs, as the compression is done in each job's thread.

When the autoxflate plugin is configured, it will write some status information into the joblog.

used compression algorithm:
   autodeflation: compressor on device FileStorage is FZ4H

configured inflation and deflation blocks:
   autoxflate-sd.c: FileStorage OUT:[SD->inflate=yes->deflate=yes->DEV] IN:[DEV->inflate=yes->deflate=yes->SD]

overall deflation/inflation ratio:
   autoxflate-sd.c: deflate ratio: 50.59%

Additionally, Auto XFlate On Replication (Sd->Storage) can be configured in the Storage resource.

python-sd Plugin

The python-sd plugin behaves similar to the python-dir Plugin.

scsicrypto-sd

This plugin is part of the bareos-storage-tape package.

General

LTO Hardware Encryption

Modern tape drives, for example LTO (from LTO4 onwards), support hardware encryption. There are several ways of using encryption with these drives. The following three types of key management are available for encrypting drives; the transmission of the keys to the volumes is accomplished by one of the three:

  • A backup application that supports Application Managed Encryption (AME)

  • A tape library that supports Library Managed Encryption (LME)

  • A Key Management Appliance (KMA)

We added support for the Application Managed Encryption (AME) scheme: on labeling, a crypto key is generated for a volume, and when the volume is mounted, the crypto key is loaded. When the volume is finally unmounted, the key is cleared from the memory of the tape drive using the SCSI SPOUT command set.

If you have implemented Library Managed Encryption (LME) or a Key Management Appliance (KMA), there is no need to have support from Bareos on loading and clearing the encryption keys, as either the Library knows the per volume encryption keys itself, or it will ask the KMA for the encryption key when it needs it. For big installations you might consider using a KMA, but the Application Managed Encryption implemented in Bareos should also scale rather well and have a low overhead as the keys are only loaded and cleared when needed.

The scsicrypto-sd plugin

The scsicrypto-sd plugin hooks into the unload, label read, label write and label verified events for loading and clearing the key. It checks whether it needs to clear the drive by either using an internal state (if it loaded a key before) or by checking the state with a special option that first issues an encryption status query. If there is a connection to the director and the volume information is not available, it will ask the director for the data on the currently loaded volume. If no connection is available, a cache will be used which should contain the most recently mounted volumes. If an encryption key is available, it will be loaded into the drive's memory.

Changes in the director

The director has been extended with additional code for handling hardware data encryption. The extra keyword encrypt on the label of a volume will force the director to generate a new semi-random passphrase for the volume, which will be stored in the database as part of the media information.

A passphrase is always stored in the database base64-encoded. When a so called Key Encryption Key is set in the config of the director, the passphrase is first wrapped using RFC3394 key wrapping and then base64-encoded. By using key wrapping, the keys in the database are safe against people sniffing the info, as the data is still encrypted using the Key Encryption Key (which in essence is just an extra passphrase of the same length as the volume passphrases used).

When the storage daemon needs to mount the volume, it will ask the director for the volume information and that protocol is extended with the exchange of the base64-wrapped encryption key (passphrase). The storage daemon provides an extra config option in which it records the Key Encryption Key of the particular director, and as such can unwrap the key sent into the original passphrase.

As can be seen from the above info, we don't allow the user to enter a passphrase, but generate a semi-random passphrase using the openssl random functions (if available) and convert it into a readable ASCII stream of letters, numbers and most other characters, apart from quotes, spaces etc. This produces much stronger passphrases than requesting them from a user. As we store this information in the database, the user never has to enter these passphrases.

The volume label is written in unencrypted form to the volume, so we can always recognize a Bareos volume. When the key is loaded onto the drive, we set the decryption mode to mixed, so we can read both unencrypted and encrypted data from the volume. When no key or the wrong key has been loaded, the drive will give an IO error when trying to read the volume. For disaster recovery you can store the Key Encryption Key and the content of the wrapped encryption keys somewhere safe and the bscrypto tool together with the scsicrypto-sd plugin can be used to get access to your volumes, in case you ever lose your complete environment.

If you don’t want to use the scsicrypto-sd plugin when doing DR and you are only reading one volume, you can also set the crypto key using the bscrypto tool. Because we use the mixed decryption mode, in which you can read both encrypted and unencrypted data from a volume, you can set the right encryption key before reading the volume label.

If you need to read more than one volume, you better use the scsicrypto-sd plugin with tools like bscan/bextract, as the plugin will then auto-load the correct encryption key when it loads the volume, similarly to what the storage daemon does when performing backups and restores.

The volume label is unencrypted, so a volume can also be recognized by a non-encrypted installation, but it won’t be able to read the actual data from it. Using an encrypted volume label doesn’t add much security (there is no security-related info in the volume label anyhow) and it makes it harder to recognize either a labeled volume with encrypted data or an unlabeled new volume (both would return an IO-error on read of the label.)

Configuration of the scsicrypto-sd plugin

SCSI crypto setup

The initial setup of SCSI crypto looks something like this:

  • Generate a Key Encryption Key e.g.

    bscrypto -g -
    

For details see bscrypto.

Security Setup

Some security levels need to be increased for the storage daemon to be able to use the low level SCSI interface for setting and getting the encryption status on a tape device.

The following additional security settings are needed, depending on the operating system:

Linux (SG_IO ioctl interface):

To perform the operations required for scsicrypto, the programs must either run as user root or the additional capability CAP_SYS_RAWIO+EP (see capabilities(7)) must be set. The Bareos Storage Daemon normally runs as user bareos. Running it as root is not recommended.

If bareos-sd does not have the appropriate capabilities, all other tape operations may still work correctly, but you will get “Unable to perform SG_IO ioctl” errors.

Note

Since Version >= 21.0.1 package installation and upgrade will check for the presence of .enable-cap_sys_rawio in your bareos config dir and will configure the required capabilities. If you want capabilities automatically set up during package install, you can just create /etc/bareos/.enable-cap_sys_rawio.

Before Version 21.0.1 it was mandatory to set up the capabilities manually after each update (see below).
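For example, the automatic setup described above can be enabled with:

touch /etc/bareos/.enable-cap_sys_rawio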

setcap binaries (recommended)

You can also set up the extra capability on bareos-sd, bcopy, bextract, bls, bscan, bscrypto, btape by running the following commands:

Set the setting with our helper

/usr/lib/bareos/scripts/bareos-config set_scsicrypto_capabilities

Set the setting manually

setcap cap_sys_rawio=ep /usr/sbin/bareos-sd
setcap cap_sys_rawio=ep /usr/sbin/bcopy
setcap cap_sys_rawio=ep /usr/sbin/bextract
setcap cap_sys_rawio=ep /usr/sbin/bls
setcap cap_sys_rawio=ep /usr/sbin/bscan
setcap cap_sys_rawio=ep /usr/sbin/bscrypto
setcap cap_sys_rawio=ep /usr/sbin/btape

Remove the setting with our helper

/usr/lib/bareos/scripts/bareos-config unset_scsicrypto_capabilities

Remove the setting manually

setcap -r /usr/sbin/bareos-sd
setcap -r /usr/sbin/bcopy
setcap -r /usr/sbin/bextract
setcap -r /usr/sbin/bls
setcap -r /usr/sbin/bscan
setcap -r /usr/sbin/bscrypto
setcap -r /usr/sbin/btape

Check the setting with our helper

/usr/lib/bareos/scripts/bareos-config check_scsicrypto_capabilities

Check the setting manually

getcap -v /usr/sbin/bareos-sd
getcap -v /usr/sbin/bcopy
getcap -v /usr/sbin/bextract
getcap -v /usr/sbin/bls
getcap -v /usr/sbin/bscan
getcap -v /usr/sbin/bscrypto
getcap -v /usr/sbin/btape

getcap and setcap are part of libcap-progs.

Warning

Adding capabilities like cap_sys_rawio to binaries increases the potential for their abuse. We therefore also recommend restricting their ownership to root as owner and bareos as group, and setting the file mode to 0750. Doing so restricts execution to root and members of the group bareos. All these steps are done for you by our helper.
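For a single binary, the manual hardening steps the helper performs would look like this sketch:

chown root:bareos /usr/sbin/bareos-sd
chmod 0750 /usr/sbin/bareos-sd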

systemd (not recommended)

To add the capabilities to bareos-sd.service, create the file /etc/systemd/system/bareos-sd.d/override.conf with a section containing the AmbientCapabilities=CAP_SYS_RAWIO line. The easiest way to create this file is to use the following instructions as root.

systemctl edit bareos-sd.service

Fill the file with the following content, then save and exit

### Editing /etc/systemd/system/bareos-storage.service.d/override.conf
### Anything between here and the comment below will become the new contents of the file

[Service]
AmbientCapabilities=CAP_SYS_RAWIO

Reload systemd configuration and restart bareos-sd

systemctl daemon-reload

systemctl restart bareos-sd

systemctl status bareos-sd
   ● bareos-storage.service - Bareos Storage Daemon service
   Loaded: loaded (/lib/systemd/system/bareos-storage.service; enabled; vendor preset: enabled)
   Drop-In: /etc/systemd/system/bareos-storage.service.d
   └─override.conf
   Active: active (running) since Tue 2022-02-01 15:12:49 CET; 5s ago
   Docs: man:bareos-sd(8)
   Main PID: 11142 (bareos-sd)
   Tasks: 2 (limit: 2298)
   Memory: 1.1M
   CPU: 8ms
   CGroup: /system.slice/bareos-storage.service
   └─11142 /usr/sbin/bareos-sd -f

   systemd[1]: Started Bareos Storage Daemon service.

To check status of capabilities of the running daemon you can use the getpcaps followed by the pid of bareos-sd.

root:~# getpcaps 11142
11142: cap_sys_rawio=eip

Warning

As of systemd version 249, there is no mechanism to pass the restricted flag (+ep), so the result will always be the full CAP_SYS_RAWIO (eip).

Solaris (USCSI ioctl interface):

The user running the storage daemon needs the following additional privileges:

  • PRIV_SYS_DEVICES (see privileges(5))

If you are running the storage daemon as a user other than root (which has the PRIV_SYS_DEVICES privilege), you need to add this privilege to the user's current set of privileges. This can be done by setting it either as a project for the user, or as a set of extra privileges in the SMF definition starting the storage daemon. The SMF setup is the cleanest one.

For SMF make sure you have something like this in the instance block:

<method_context working_directory=":default">
  <method_credential user="bareos" group="bareos" privileges="basic,sys_devices"/>
</method_context>

Changes in bareos-sd configuration

Changes in bareos-dir configuration

Testing

Restart the Storage Daemon and the Director. After this you can label new volumes with the encrypt option, e.g.

label slots=1-5 barcodes encrypt

Disaster Recovery

For Disaster Recovery (DR) you need the following information:

  • Actual bareos-sd configuration files with config options enabled as described above, including, among others, a definition of a director with the Key Encryption Key used for creating the encryption keys of the volumes.

  • The actual keys used for the encryption of the volumes.

This data needs to be available as a so called crypto cache file which is used by the plugin when no connection to the director can be made to do a lookup (most likely on DR).

Most of the time the needed information, e.g. the bootstrap info, is available on recently written volumes, and the encryption cache will usually contain the most recent data, so a recent copy of the bareos-sd.<portnr>.cryptoc file in the working directory is often sufficient. You can also save the info from the database in a safe place and use bscrypto to populate this info (VolumeName → EncryptKey) into the crypto cache file used by bextract and bscan. You can use bscrypto with the following flags to create a new or update an existing crypto cache file, e.g.:

bscrypto -p /var/lib/bareos/bareos-sd.<portnr>.cryptoc

  • A valid BSR file containing the location of the last save of the database makes recovery much easier. Adding a post script to the database save job could collect the needed info and make sure it is stored somewhere safe.

  • Recover the database in the normal way, e.g. for PostgreSQL:

    bextract -D <director_name> -V <volname> /dev/nst0 /tmp -b bootstrap.bsr
    /usr/lib/bareos/scripts/create_bareos_database
    /usr/lib/bareos/scripts/grant_bareos_privileges
    psql bareos < /tmp/var/lib/bareos/bareos.sql
    

Or something similar (change paths to follow where you installed the software or where the package put it).

Note

As described at the beginning of this chapter, there are different types of key management: AME, LME and KMA. If the library is set up for LME or KMA, it probably won't allow our AME setup, and the scsi-crypto plugin will fail to set/clear the encryption key. To be able to use AME you need to "Modify Encryption Method" and set it to something like "Application Managed". If you decide to use LME or KMA, you don't have to bother with the whole AME setup, which may be easier for big libraries, although the overhead of using AME even for very big libraries should be minimal.

scsitapealert-sd

This plugin is part of the bareos-storage-tape package.

File Daemon Plugins

File Daemon plugins are configured by the Plugin directive of a File Set.

Warning

Currently the plugin command is stored as part of the backup. The restore command in your directive should be flexible enough to cope with future changes; otherwise you could run into trouble.

Apache Libcloud Plugin

The Libcloud plugin can be used to back up objects from cloud storage via the Simple Storage Service (S3) protocol. The plugin code is based on the work of Alexandre Bruyelles.

Status of Libcloud Plugin

The status of the Libcloud plugin is experimental. It can automatically recurse into nested buckets and back up all contained objects on an S3 storage. However, objects cannot be restored directly back to the storage; a restore will write these objects as files on a filesystem.

Requirements of Libcloud Plugin

To use the Apache Libcloud backend you need to have the Libcloud module available for the Python version used by the Bareos Python plugins.

The plugin needs several options to run properly: the plugin options in the fileset resource and an additional configuration file. Both are described below.

Installation of Libcloud Plugin

The installation is done by installing the package bareos-filedaemon-libcloud-python-plugin.

Configuration of Libcloud Plugin

/etc/bareos/bareos-dir.d/fileset/PluginTest.conf
FileSet {
  Name = "PluginTest"
  Description = "Test the Plugin functionality with a Python Plugin."
  Include {
    Options {
      Signature = XXH128
    }
    Plugin = "python"
             ":module_name=bareos-fd-libcloud"
             ":config_file=/etc/bareos/libcloud_config.ini"
             ":buckets_include=user_data"
             ":buckets_exclude=tmp"
  }
}

The plugin options, separated by a colon:

module_path

Path to the bareos modules (optional)

module_name=bareos-fd-libcloud

This is the name of the plugin module

config_file

The plugin needs additional parameters, this is the path to the config file (see below)

buckets_include

Comma-separated list of buckets to include in backup

buckets_exclude

Comma-separated list of buckets to exclude from backup

And the job as follows:

/etc/bareos/bareos-dir.d/job/testvm1_job.conf
Job {
   Name = "testlibcloud_job"
   JobDefs = "DefaultJob"
   FileSet = "PluginTest"
}

And the plugin config file as follows:

/etc/bareos/libcloud_config.ini
[host]
hostname=127.0.0.1
port=9000
tls=false
provider=S3

[credentials]
username=admin
password=admin

[misc]
nb_worker=20
queue_size=1000
prefetch_size=250*1024*1024
temporary_download_directory=/dev/shm/bareos_libcloud

Note

Do not use quotes in the above config file; it is processed by the Python ConfigParser module and the quotes would not be stripped from the string.

Mandatory Plugin Options:

These options in the config file are mandatory:

hostname

The hostname/ip address of the storage backend server

port

The port number for the backend server

tls

Use Transport encryption, if supported by the backend

provider

The provider string, ‘S3’ being the default if not specified

username

The username to use for backups

password

The password for the backup user

nb_worker

The number of worker processes that can preload data from objects simultaneously before they are given to the plugin process that does the backup

queue_size

The maximum size, in number of objects, of the internal communication queue between the processes

prefetch_size

The maximum object size in bytes that should be preloaded from the workers; objects larger than this size are loaded by the plugin process itself

temporary_download_directory

The local path where the worker processes put their temporarily downloaded files to; the filedaemon process needs read and write access to this path

Optional Plugin Options:

These options in the config file are optional:

fail_on_download_error

When this option is enabled, any error during a file download will fail the backup job. By default a warning will be issued and the next file will be backed up.

job_message_after_each_number_of_objects

When running a backup, a job message is written to the joblog after each given number of objects, or no message is written if the parameter equals 0. Default: 100.

bpipe Plugin

The bpipe plugin is a generic pipe program that simply transmits data from a specified program to Bareos for backup, and from Bareos to a specified program for restore. The purpose of the plugin is to provide an interface to any system program for backup and restore. This allows you, for example, to do database backups without a local dump. By using different command lines to bpipe, you can back up any kind of data (ASCII or binary) depending on the program called.

On Linux, the Bareos bpipe plugin is part of the bareos-filedaemon package and is therefore installed on any system running the filedaemon.

The bpipe plugin is so simple and flexible, you may call it the “Swiss Army Knife” of the current existing plugins for Bareos.

The bpipe plugin is specified in the Include section of your Job’s FileSet resource.

bpipe fileset
FileSet {
  Name = "MyFileSet"
  Include {
    Options {
      Signature = XXH128
      Compression = LZ4
    }
    Plugin = "bpipe"
             ":file=<filepath>"
             ":reader=<readprogram>"
             ":writer=<writeprogram>"
  }
}

The syntax and semantics of the Plugin directive require the first part of the string, up to the colon, to be the name of the plugin. Everything after the first colon is ignored by the File Daemon but is passed to the plugin. Thus the plugin writer may define the meaning of the rest of the string as they wish. The full syntax of the plugin directive as interpreted by the bpipe plugin is:

Since Bareos Version >= 20 the plugin string can be spread over multiple lines using quotes as shown above.

bpipe directive
Plugin = "<plugin>:file=<filepath>:reader=<readprogram>:writer=<writeprogram>"
plugin

is the name of the plugin with the trailing -fd.so stripped off, so in this case, we would put bpipe in the field.

filepath

specifies the namespace, which for bpipe is the pseudo path and filename under which the backup will be saved. This pseudo path and filename will be seen by the user in the restore file tree. For example, if the value is /MySQL/mydump.sql, the data backed up by the plugin will be put under that “pseudo” path and filename. You must be careful to choose a naming convention that is unique to avoid a conflict with a path and filename that actually exists on your system.

readprogram

for the bpipe plugin specifies the “reader” program that is called by the plugin during backup to read the data. bpipe will call this program by doing a popen on it.

writeprogram

for the bpipe plugin specifies the “writer” program that is called by the plugin during restore to write the data back to the filesystem. To simply create a file containing the data of the backup, the following command can be used on a Unix system:

writer=sh -c 'cat >/var/tmp/bpipe.data'

Please note that the “reader” and “writer” programs described above are executed directly by Bareos, which means there is no shell interpretation of any command line arguments you might use. If you want to use shell characters (redirection of input or output, …), then we recommend that you put your command or commands in a shell script and execute the script. In addition, if you back up a file with the reader program, Bareos will not automatically create the path to the file when running the writer program during the restore. Either the path must exist, or you must create it explicitly with your command or in a shell script.
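As an illustrative sketch, a PostgreSQL dump could be piped through bpipe like this (the pseudo path and the dump/restore commands are examples, not fixed values; see the chapters linked below for complete recipes):

bpipe fileset for a database dump (sketch)
FileSet {
  Name = "postgresql-bpipe"
  Include {
    Options {
      Signature = XXH128
    }
    Plugin = "bpipe"
             ":file=/PGSQL/dump.sql"
             ":reader=pg_dumpall -U postgres"
             ":writer=sh -c 'cat >/var/tmp/postgresql-restore.sql'"
  }
}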

See the examples about Backup of a PostgreSQL Database and Backup of a MySQL Database.

GlusterFS Plugin

In contrast to the GFAPI backend, which is used to store data on a Gluster system, this plugin is intended to back up data from a Gluster system to other media. The package bareos-filedaemon-glusterfs-plugin (Version >= 15.2.0) contains an example configuration file that must be adapted to your environment.

LDAP Plugin

This plugin is intended to back up (and restore) the contents of an LDAP server. It uses normal LDAP operations for this. The package bareos-filedaemon-ldap-python-plugin (Version >= 15.2.0) contains an example configuration file that must be adapted to your environment.

Please note that the plugin was tested against an OpenLDAP server. Other LDAP servers may behave differently and there might be problems when backing up or restoring objects. Most notably, it will not be possible to restore objects to an Active Directory server.

On restore, if the object to be restored already exists on the LDAP server, it will be deleted first, then restored from the backup. This could cause problems if your LDAP server uses referential integrity (e.g. if a user object is restored, the LDAP server might remove the user from all groups when it is being deleted and recreated during the restore process).

MariaDB mariabackup Plugin

This plugin uses the tool mariabackup to make full and incremental backups of MariaDB databases. mariabackup is part of the standard MariaDB installation.

Documentation of mariabackup is available online: https://mariadb.com/kb/en/mariabackup/.

mariabackup has been stable since MariaDB 10.1.48.

Prerequisites of mariabackup Plugin

The mariabackup binary needs to be installed on the Bareos File Daemon; refer to the documentation link above.

For authentication the .my.cnf file of the user running the Bareos File Daemon is used. Before proceeding, make sure that mariabackup can connect to the database, create backups and is able to restore.

Installation of mariabackup Plugin

Make sure you have met the prerequisites, then install the package bareos-filedaemon-mariabackup-python-plugin.

Configuration of mariabackup Plugin

Activate your plugin directory in the Bareos File Daemon configuration. See File Daemon Plugins for more about plugins in general.

bareos-fd.d/client/myself.conf
Client {
  ...
  Plugin Directory = /usr/lib64/bareos/plugins
  Plugin Names = "python3"
}

Now include the plugin as command-plugin in the Fileset resource:

bareos-dir.d/fileset/mariadb.conf
FileSet {
    Name = "mariadb"
    Include  {
        Options {
            Signature = XXH128
        }
        #...
        Plugin = "python"
                 ":module_name=bareos-fd-mariabackup"
                 ":mycnf=/root/.my.cnf"
    }
}

The plugin will call mariabackup to create a backup stream of all databases in the xbstream format. This stream will be processed by Bareos. Full backups can be made for all table formats, while incremental backups are only supported for InnoDB tables. Incremental backups for other table formats will create a full backup.

You can append options to the plugin call as key=value pairs, separated by a colon (:). The following options are available (a combined example follows the list):

  • With mycnf you can make mariabackup use a special mycnf-file with login credentials.

  • dumpbinary lets you modify the default command mariabackup.

  • dumpoptions to modify the options for mariabackup. Default setting is: --backup --stream=xbstream --extra-lsndir=/tmp/individual_tempdir

  • restorecommand to modify the command for restore. Default setting is: mbstream -x -C

  • strictIncremental: By default (false), an incremental backup will create data even if the Log Sequence Number (LSN) has not increased since the last backup. This ensures that eventual changes to MYISAM/ARIA/Rocks tables get into the backup; these table formats do not support incremental backups, so you will always get a full backup of these tables. If set to true, no data will be written into the backup if the LSN has not changed.
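A combined sketch (file name and option values are illustrative):

bareos-dir.d/fileset/mariadb-options.conf: sketch
FileSet {
    Name = "mariadb-options"
    Include  {
        Options {
            Signature = XXH128
        }
        Plugin = "python3"
                 ":module_name=bareos-fd-mariabackup"
                 ":mycnf=/etc/bareos/mariabackup.my.cnf"
                 ":strictIncremental=true"
    }
}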

Restore with mariabackup Plugin

With the usual Bareos restore mechanism a file-hierarchy will be created on the restore client under the default restore location:

/tmp/bareos-restores/_mariabackup/

Each restore job gets its own sub-directory named by its jobid, because mariabackup expects an empty directory. In that sub-directory, a new directory is created for every backup job that was part of the Full-Incremental sequence.

The naming scheme is: fromLSN_toLSN_jobid

Example:

/tmp/bareos-restores/_mariabackup/656/
|-- 00000000000000000000_00000000000010129154_0000000604
|-- 00000000000010129154_00000000000010142295_0000000635
|-- 00000000000010142295_00000000000010201260_0000000708

This example shows the restore tree for the restore job with ID 656. The first sub-directory contains all files from the first full backup job with ID 604. It starts at LSN 0 and goes until LSN 10129154.

The next line is the first incremental job with ID 635, starting at LSN 10129154 until 10142295. The third line is the second incremental job with ID 708.

To further prepare the restored files, use the mariabackup --prepare command. Read https://mariadb.com/kb/en/incremental-backup-and-restore-with-mariabackup/ for more information.
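A sketch of the prepare sequence, using the directory names from the example above (see the MariaDB documentation for the authoritative procedure):

cd /tmp/bareos-restores/_mariabackup/656
# prepare the full backup first
mariabackup --prepare --target-dir=00000000000000000000_00000000000010129154_0000000604
# then apply each incremental backup, in order
mariabackup --prepare --target-dir=00000000000000000000_00000000000010129154_0000000604 \
            --incremental-dir=00000000000010129154_00000000000010142295_0000000635
mariabackup --prepare --target-dir=00000000000000000000_00000000000010129154_0000000604 \
            --incremental-dir=00000000000010142295_00000000000010201260_0000000708

Afterwards the prepared data can be moved back into the database data directory, e.g. with mariabackup --copy-back.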

Our system test can also serve as an example, see systemtests/tests/py2plug-fd-mariabackup/testrunner.

Troubleshooting

If things don’t work as expected, make sure that

  • the Bareos File Daemon (FD) works in general, so that you can make simple file backups and restores.

  • the Bareos FD Python plugins work in general, try one of the shipped simple sample plugins.

  • mariabackup works as user root; MariaDB access needs to be configured properly.

MSSQL Plugin

See chapter Backup of MSSQL Databases with Bareos Plugin.

MySQL Plugin

See the chapters Percona XtraBackup Plugin and Backup of MySQL Databases using the Python MySQL plugin.

Percona XtraBackup Plugin

This plugin uses Percona's XtraBackup tool to make full and incremental backups of MySQL databases.

The key features of XtraBackup are:

  • Incremental backups

  • Backups that complete quickly and reliably

  • Uninterrupted transaction processing during backups

  • Savings on disk space and network bandwidth

  • Higher uptime due to faster restore time

Incremental backups only work for INNODB tables, when using MYISAM, only full backups can be created.

Warning

In MariaDB 10.1 and later, mariabackup is the recommended backup method to use instead of Percona XtraBackup. As such we recommend using the dedicated plugin for MariaDB.

Prerequisites of percona XtraBackup Plugin

Install the XtraBackup tool from Percona. Documentation and packages are available here: https://www.percona.com/mysql/software. The plugin was successfully tested with XtraBackup versions 2.3.5 and 2.4.4.

For authentication the .my.cnf file of the user running the Bareos File Daemon is used. Before proceeding, make sure that XtraBackup can connect to the database and create backups.

Installation of percona XtraBackup Plugin

Make sure you have met the prerequisites, then install the package bareos-filedaemon-percona_XtraBackup-python-plugin.

Configuration of percona XtraBackup Plugin

Activate your plugin directory in the Bareos File Daemon configuration. See File Daemon Plugins for more about plugins in general.

bareos-fd.d/client/myself.conf
Client {
  ...
  Plugin Directory = /usr/lib64/bareos/plugins
  Plugin Names = "python3"
}

Now include the plugin as command-plugin in the Fileset resource:

bareos-dir.d/fileset/mysql.conf
FileSet {
    Name = "mysql"
    Include  {
        Options {
            Signature = XXH128
        }
        #...
        Plugin = "python"
                 ":module_name=bareos-fd-percona-xtrabackup"
                 ":mycnf=/root/.my.cnf"
    }
}

If used this way, the plugin will call XtraBackup to create a backup of all databases in the xbstream format. This stream will be processed by Bareos. If job level is incremental, XtraBackup will perform an incremental backup since the last backup – for InnoDB tables. If you have MyISAM tables, you will get a full backup of those.

You can append options to the plugin call as key=value pairs, separated by a colon (:). The following options are available:

  • With mycnf you can make XtraBackup use a special mycnf-file with login credentials.

  • dumpbinary lets you modify the default command XtraBackup.

  • dumpoptions to modify the options for XtraBackup. Default setting is: --backup --datadir=/var/lib/mysql/ --stream=xbstream --extra-lsndir=/tmp/individual_tempdir

  • restorecommand to modify the command for restore. Default setting is: xbstream -x -C

  • strictIncremental: By default (false), an incremental backup will create data even if the Log Sequence Number (LSN) has not increased since the last backup. This ensures that eventual changes to MYISAM tables get into the backup; MYISAM does not support incremental backups, so you will always get a full backup of these tables. If set to true, no data will be written into the backup if the LSN has not changed.

Restore with percona XtraBackup Plugin

With the usual Bareos restore mechanism a file-hierarchy will be created on the restore client under the default restore location:

/tmp/bareos-restores/_percona/

Each restore job gets its own subdirectory, because Percona expects an empty directory. In that subdirectory, a new directory is created for every backup job that was part of the Full-Incremental sequence.

The naming scheme is: fromLSN_toLSN_jobid

Example:

/tmp/bareos-restores/_percona/351/
|-- 00000000000000000000_00000000000010129154_0000000334
|-- 00000000000010129154_00000000000010142295_0000000335
|-- 00000000000010142295_00000000000010201260_0000000338

This example shows the restore tree for the restore job with ID 351. The first subdirectory contains all files from the first full backup job with ID 334. It starts at LSN 0 and goes until LSN 10129154.

The next line is the first incremental job with ID 335, starting at LSN 10129154 until 10142295. The third line is the second incremental job with ID 338.

To further prepare the restored files, use the XtraBackup --prepare command. For more information read https://docs.percona.com/percona-xtrabackup/2.4/backup_scenarios/incremental_backup.html.
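A sketch of the prepare sequence, using the directory names from the example above (with XtraBackup 2.4, --apply-log-only is used for all steps except the last incremental; see the Percona documentation for the authoritative procedure):

cd /tmp/bareos-restores/_percona/351
xtrabackup --prepare --apply-log-only --target-dir=00000000000000000000_00000000000010129154_0000000334
xtrabackup --prepare --apply-log-only --target-dir=00000000000000000000_00000000000010129154_0000000334 \
           --incremental-dir=00000000000010129154_00000000000010142295_0000000335
xtrabackup --prepare --target-dir=00000000000000000000_00000000000010129154_0000000334 \
           --incremental-dir=00000000000010142295_00000000000010201260_0000000338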

Troubleshooting

If things don’t work as expected, make sure that

  • the Bareos File Daemon (FD) works in general, so that you can make simple file backups and restores.

  • the Bareos FD Python plugins work in general, try one of the shipped simple sample plugins.

  • XtraBackup works as user root; MySQL access needs to be configured properly.

PostgreSQL Plugin

The PostgreSQL plugin supports an online (Hot) backup of database files and database transaction logs (WAL). With online database and transaction logs, the backup plugin can perform Point-In-Time-Restore (PITR) up to a single selected transaction or date/time.

This plugin uses the standard PostgreSQL backup API routines, based on the pg_backup_start() and pg_backup_stop() functions in non-exclusive mode (before PostgreSQL 15 these were called pg_start_backup() and pg_stop_backup()).

The key features are:

  • Full and Incremental backups

  • Point in time recovery

  • Backups that complete quickly and reliably

  • Uninterrupted transaction processing during backups

  • Savings on disk space and network bandwidth

  • Higher uptime due to faster restore time

Concept

Please make sure to read the PostgreSQL documentation about the backup and restore process: https://www.postgresql.org/docs/current/continuous-archiving.html

This is just a short outline of the tasks performed by the plugin.

  1. Notify PostgreSQL that we want to start backing up the database files using the SELECT pg_backup_start() statement

  2. Backup database files

  3. Detect if tablespaces are in use. Back up the external locations of all tablespaces.

  4. Notify PostgreSQL when done with file backups using the SELECT pg_backup_stop() statement

  5. PostgreSQL will write Write-Ahead-Logfiles (WAL) into the WAL archive directory. These transaction logs contain the transactions performed while the file backup proceeded

  6. Back up the freshly created WAL files

  7. Add the files required for a restore, backup_label, recovery.signal (or recovery.conf for versions < 12) and tablespace_map (if tablespaces are in use), as virtual files to the backup.

[Sequence diagram: the fd plugin calls pg_backup_start(), backs up the database files while the cluster is in online backup mode, calls pg_backup_stop(), then backs up the newly created WAL files and adds the files required for restore as virtual files.]

Full Backup tasks performed by the plugin

Incremental backups only have to back up WAL files created since the last reference backup. The PostgreSQL plugin calls pg_switch_wal() to make PostgreSQL create a new WAL file; then all WAL files created since the previous backup are backed up. The plugin receives the PostgreSQL major version number, last backup stop time and last LSN from the previous backup and verifies those values. After the backup these values are stored again for the next backup.

[Sequence diagram: the fd plugin verifies the PostgreSQL major version, last_backup_stop_time and last_lsn from the previous backup (exiting with an error on inconsistency), calls pg_switch_wal(), backs up the WAL files created since the previous backup and stores the values for the next backup.]

Incremental Backup tasks performed by the plugin

The restore basically works like this:

  1. Restore all files to the original PostgreSQL location

  2. Configure PostgreSQL for the recovery (see below)

  3. Start PostgreSQL

  4. PostgreSQL will restore the latest possible consistent point in time. You can also restore to any other point in time available in the WAL files; please refer to the PostgreSQL documentation for more details.

[Sequence diagram: the administrator stops PostgreSQL and empties the data directories; the plugin restores the backed up files including backup_label, recovery.signal and tablespace.map; the administrator configures restore_command in postgresql.conf and starts PostgreSQL, which recovers the database to the end of the WAL log and resumes normal operation; finally the administrator verifies that the recovery was successful.]

Recovery tasks

Warning

In order to make coherent backups, it is imperative that the same PostgreSQL major version is used for full and incremental backups depending on each other.

Prerequisites for the PostgreSQL Plugin

This plugin is a Bareos Python 3 plugin. It requires PostgreSQL cluster version >= 10 and the Python module pg8000 >= 1.16 to be installed.

Since Version >= 21 the plugin uses the Python module pg8000 (minimum version 1.16) instead of psycopg2, and Python >= 3.6 is mandatory.

If a distribution-provided pg8000 package exists in the same or a newer version, it can be used. Otherwise it must be installed using the command pip3 install pg8000.

The plugin must be installed on the same host where the PostgreSQL cluster runs, as files are backed up from the local filesystem.

Warning

You have to enable PostgreSQL WAL-Archiving. The process and the plugin depend on it.

As a minimum this requires that you create a WAL archive directory and matching settings in your PostgreSQL configuration file postgresql.conf.

In our examples we assume the WAL archive directory as /var/lib/pgsql/wal_archive/.

postgresql.conf
...
# wal_level default is replica
wal_level = replica
archive_mode = on
archive_command = 'test ! -f /var/lib/pgsql/wal_archive/%f && cp %p /var/lib/pgsql/wal_archive/%f'
...

Please refer to the PostgreSQL documentation for details.

Note

While the PostgreSQL plugin backs up only the required files from the WAL archive directory, old files are not removed automatically.

Installation of the PostgreSQL Plugin

Make sure you have met the prerequisites, after that install the package bareos-filedaemon-postgresql-python-plugin.

Configuration of the PostgreSQL Plugin

Activate your plugin directory in the Bareos File Daemon configuration. See File Daemon Plugins for more about plugins in general.

bareos-fd.d/client/myself.conf
Client {
  ...
  Plugin Directory = /usr/lib64/bareos/plugins
  Plugin Names = "python3"
}

Now include the plugin as command-plugin in the fileset resource and define a job using this fileset:

bareos-dir.d/fileset/postgresql.conf
FileSet {
    Name = "postgresql"
    Include  {
        Options {
            Compression = LZ4
            Signature = XXH128
        }
        Plugin = "python"
                 ":module_name=bareos-fd-postgresql"
                 ":db_host=/run/postgresql/"
                 ":wal_archive_dir=/var/lib/pgsql/wal_archive/"
    }
}

You can append options to the plugin call as key=value pairs, separated by :. The following options are available:

wal_archive_dir

directory where PostgreSQL archives the WAL files, as defined in your postgresql.conf with the archive_command directive. This is a mandatory option; there is no default.

db_user

with this user the plugin will try to connect to the database. This role should be granted access to all pg_settings and the backup functions in the cluster. Default: root

db_password

an optional password needed for the connection. Default: None

db_name

the named database to use for the connection. Default: postgres

db_host

used to specify the host or, when starting with a leading /, the socket directory.

Usually you will set it to /run/postgresql.

Default: localhost

db_port

useful if the cluster is not listening on the default port. Default: 5432

ignore_subdirs

a comma-separated list of directories below the data_directory that you want to exclude. Default: pgsql_tmp

Note

As recommended by upstream, the contents of the following sub-directories will not be backed up, but the sub-directories themselves will always be included: pg_dynshmem, pg_notify, pg_serial, pg_snapshots, pg_stat_tmp, pg_subtrans, pg_wal

switch_wal

If set to true (default), the plugin will let PostgreSQL write a new WAL file if the current Log Sequence Number (LSN) is greater than the LSN from the previous job, to make sure that all changes are backed up. Default: true

switch_wal_timeout

Timeout in seconds to wait for WAL archiving after the WAL switch. Default: 60

role

Set the role used after login, before the first SQL call. Default: None

start_fast

By default, the backup will start after a checkpoint, which can take some time. If start_fast is true, pg_backup_start will be executed as quickly as possible. This enforces an immediate checkpoint, which can cause a spike in I/O operations and slow any concurrently executing queries. Default: False

stop_wait_wal_archive

Optional parameter of type boolean. It controls whether the plugin will wait for the WAL archiving to be complete at the end of the backup. By default the plugin will wait; we don't recommend changing this. Default: True

Note

The plugin is using the non-exclusive backup method. Several backups can run at the same time on the cluster, which allows different tools to back up the cluster simultaneously.

For Bareos we recommend setting Allow Duplicate Jobs (Dir->Job) = No to limit the number of jobs to only one at a time.
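A job using this fileset could then look like this sketch (the job and JobDefs names are illustrative):

bareos-dir.d/job/postgresql.conf: sketch
Job {
    Name = "postgresql"
    JobDefs = "DefaultJob"
    FileSet = "postgresql"
    Allow Duplicate Jobs = no
}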

Restore with the PostgreSQL Plugin

With the usual Bareos restore mechanism a file-hierarchy will be created on the restore client under the default restore location according to the options set:

  • <restore prefix>/<cluster_data_directory>/

  • <restore prefix>/<wal_archive_dir>/

This example describes how to restore to the latest possible consistent point in time. You can also restore to any other point in time available in the WAL files; please refer to the PostgreSQL documentation for more details.

PostgreSQL >= 12

Beginning with PostgreSQL >= 12 the configuration must be done in your PostgreSQL configuration file postgresql.conf:

postgresql.conf
...
restore_command = 'cp /var/lib/pgsql/wal_archive/%f %p'
...

Additionally a file named recovery.signal is created in your PostgreSQL datadir by the plugin. It contains as a comment the backup label jobid and the PostgreSQL major version used.

PostgreSQL < 12

For PostgreSQL < 12 you need to complete the recovery.conf in your PostgreSQL datadir. It contains as a comment the backup label jobid and the PostgreSQL major version used.

Example:

recovery.conf
restore_command = 'cp /var/lib/pgsql/wal_archive/%f %p'

Where /var/lib/pgsql/wal_archive/ is the wal_archive_dir directory.

Initiate the Recovery Process

Make sure that the user postgres is allowed to rename the recovery marker file (recovery.signal or recovery.conf), as the file needs to be renamed during the recovery process. This should be the case if the files were restored by the plugin. You might have to adapt your SELinux configuration for this.

Starting the PostgreSQL server will now initiate the recovery process.
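A minimal sketch of the recovery initiation, assuming a systemd-managed service named postgresql:

systemctl start postgresql
# returns 't' while the recovery is still running, 'f' once normal operation has resumed
sudo -u postgres psql -c 'SELECT pg_is_in_recovery();'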

Warning

When restoring a cluster which uses tablespaces, the tablespace location (directory) needs to be empty before the restore. Also ensure that the restored links in data/pg_tblspc point to the restored tablespace data; by default the symlinks will point to the original location.

Warning

After a cluster restore, we highly advise to clean up WAL files older than the new timeline history and to trigger a new full backup as soon as possible.

Troubleshooting the PostgreSQL Plugin

If things don’t work as expected, make sure that

  • the Bareos File Daemon (FD) works in general, so that you can make simple file backups and restores

  • the Bareos FD Python plugins work in general, try one of the shipped simple sample plugins

  • check your PostgreSQL data directory for the files backup_label, recovery.signal and tablespace_map. If they exist, the cluster has been restored but not restarted yet.

  • make sure your dbuser can connect to the database dbname and is allowed to issue the following statements matching your PostgreSQL version:

    SELECT current_setting('server_version_num');
    SELECT current_setting('archive_mode');
    SELECT current_setting('archive_command');
    SELECT current_setting('data_directory');
    SELECT current_setting('log_directory');
    SELECT current_setting('config_file');
    SELECT current_setting('hba_file');
    SELECT current_setting('ident_file');
    SELECT current_setting('ssl_ca_file');
    SELECT current_setting('ssl_cert_file');
    SELECT current_setting('ssl_crl_file');
    SELECT current_setting('ssl_key_file');
    SELECT current_setting('ssl_dh_params_file');
    SELECT current_setting('ssl_crl_dir');
    
    -- Version >= 15
    SELECT pg_backup_start();
    SELECT pg_backup_stop();
    
    -- Version >=10 < 15
    SELECT pg_start_backup();
    SELECT pg_stop_backup();
    
    SELECT pg_current_wal_lsn();
    SELECT pg_switch_wal();
    

python-fd Plugin

The python-fd plugin behaves similar to the python-dir Plugin. Configuration is done in the FileSet Resource on the Bareos Director and in optional configuration files on the Bareos File Daemon.

Configuration

To load a Python plugin you need

module_name

The file (or directory) name of your plugin (without the suffix .py)

module_path

Plugin path (optional, only required when using non default paths)

Plugin-specific options can be added as key=value pairs, each pair separated by a colon (:).

Configuration Files

This plugin can handle additional configuration files for the python-based plugins it wraps.

When supplying one or both of the options defaults_file or overrides_file, the supplied value will be treated as a path relative to the Bareos File Daemon configuration. When using a single configuration file instead of a configuration directory, it will be relative to that file’s parent directory. All configuration files will be read on the Bareos File Daemon that executes the job.

Depending on how the file was loaded, the options will have different precedence. When loaded via defaults_file the options in the FileSet will override those from the file. When loaded via overrides_file the options from the file will override those in the FileSet and the ones loaded from a defaults_file. In other words: a defaults_file provides default values that you can override in your FileSet and an overrides_file provides mandatory values that always take precedence.

The configuration files should contain one key-value pair per line that will be used as if they were added to the Plugin (Dir->Fileset->Include) option. Empty lines or lines starting with semicolon (;), hash (#) or left square bracket ([) will be ignored. Long values can be split across multiple lines by marking the end-of-line with a backslash (\). Finally, whitespace around keys, values or continuation lines is discarded.

plugin_defaults.ini: python-fd example configuration
# this is a comment
; this is also a comment
[sections like this will also be ignored]

key=value

# no inline comments
key=value ; this is not a comment, but part of the value

# whitespace around the key will be ignored
 another_key = another_value

# whitespace around continuation lines will be ignored, too.
long_value_key = very-long-value-\
                 split-across-lines

# trailing whitespace of the continued line will be preserved.
multiline_whitespace = value1 \
                       value2     \
                       value3
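Such a file can then be referenced from the FileSet, for example (a sketch; the module name bareos-fd-example is illustrative):

Plugin = "python3"
         ":module_name=bareos-fd-example"
         ":defaults_file=plugin_defaults.ini"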

Note

It is not possible to pass module_path or module_name using a configuration file. The python plugin will be loaded before the plugin options including the configuration files are handled.

Encoded option values

In some cases it is desirable to have option values in an encoded format. Every option passed to a Python plugin can be encoded using the supplied script bareos_encode_string.py from the scripts directory. To use such an encoded value, the option name must be suffixed with #enc so the plugin knows it needs to decode the value. Thus, if you previously configured api_key=secret_string, you could now configure api_key#enc=b7f<4WprP2baH8KX8.

Python plugin types

We basically distinguish between command-plugins and option-plugins.

Command Plugins

Command plugins are used to replace or extend the FileSet definition in the File Section. If you have a command-plugin, you can use it like in this example:

bareos-dir.conf: Python FD command plugins
FileSet {
  Name = "mysql"
  Include {
    Options {
      Signature = XXH128
    }
    Plugin = "python3"
             ":module_path=/usr/lib/bareos/plugins"
             ":module_name=bareos-fd-mysql"
  }
}

This example uses the MySQL plugin to backup MySQL dumps.

Option Plugins

Option plugins are activated in the Options resource of a FileSet definition.

Example:

bareos-dir.d/fileset/option.conf: Python FD option plugins
FileSet {
  Name = "option"
  Include {
    Options {
      Signature = XXH128
      Plugin = "python3"
               ":module_path=/usr/lib/bareos/plugins"
               ":module_name=bareos_option_example"
    }
    File = "/etc"
    File = "/usr/lib/bareos/plugins"
  }
}

This plugin from https://github.com/bareos/bareos/tree/master/contrib/fd-plugins/bareos_option_example has a method that is called before and after each file that goes into the backup. It can be used as a template for any plugin that wants to interact with files before or after backup.

VMware Plugin

The VMware Plugin can be used for agentless backups of virtual machines running on VMware vSphere. It makes use of CBT (Changed Block Tracking) to do space efficient full and incremental backups, see below for mandatory requirements.

The plugin consists of two parts. The first part is implemented in Python; it uses the vSphere Web Services API to create and remove snapshots, retrieve VM config metadata, recreate virtual machines and query CBT data. The second part is the bareos_vadp_dumper, which is implemented in C++. This binary uses the Virtual Disk Development Kit (VDDK) to retrieve the virtual disks from the hypervisor hosts.

It is included in Bareos since Version >= 15.2.0.

Status

The Plugin can do full, differential and incremental backup and restore of VM disks.

Since Version >= 23.0.3 the NVRAM of VMs is backed up and restored when recreating a VM, to ensure it can boot without issues even when EFI is enabled.

Since Version >= 23.0.0 the performance is improved and the cleanup of snapshots is enhanced.

Since Version >= 22.0.0 it also backs up the VM configuration metadata so that it can recreate deleted VMs and then restore the VM disks.

Since Version >= 22.1.0 it is possible to backup and restore VMs which have 2 or more disks on different datastores. See below for related limitations.

Since Version >= 22.1.0, on backup the plugin will retry when taking the snapshot fails; this is configurable using the options snapshot_retries and snapshot_retry_wait, see below for details.

Current limitations amongst others are:

Limitation - VMware Plugin: Normal VM disks can not be excluded from the backup.

It is not yet possible to exclude normal (dependent) VM disks from backups. However, independent disks are excluded implicitly because they are not affected by snapshots which are required for CBT based backup.

Limitation - VMware Plugin: Restore not possible on recreated VM when VM was created from template or OVA.

When creating a VM from a template or OVA, the parameter ddb.adaptertype in the .vmdk file is changed from lsilogic to buslogic, although the SCSI adapter type is VMware Paravirtual or LSI Logic. Restoring to the same, still existing VM works, but when such a VM was completely removed so that the plugin recreates it via the vSphere API, the newly created disk will have ddb.adaptertype set to lsilogic with different numbers of cylinders and heads, which causes the restore to fail due to a disk geometry mismatch. Currently there’s no known workaround.

Limitation - VMware Plugin: Restore to different vCenter Server is unsupported.

Restore to a different vCenter Server was not tested and will probably not work, so it is currently unsupported.

Limitation - VMware Plugin: Incremental or differential backups of disk which migrated to different datastore is currently unsupported.

When a disk of a VM was migrated to a different datastore, the plugin currently cannot handle incremental and differential backups properly. In that case, the plugin will detect it and fail the job. The job log will include an error message saying that full level backup of this job is required. Migrations of disks to other datastores can happen either manually or automatically when storage DRS is enabled.

Limitation - VMware Plugin: Incremental or differential backups of removed and recreated disks or after CBT reset

When a disk of a VM was removed and then added again on the same datastore with the same name, or when a CBT reset happened, then the plugin will detect this and request the full level CBT information, and all allocated blocks of that disk will be backed up in an incremental or differential job as if it were a full level job. The restore of such a job will work, but it will take longer than necessary. The backup will terminate with a warning and a recommendation to run a full level job to optimize restore time.

Since Version >= 23.0.3 the plugin option fallback_to_full_cbt=no (see below) can be used to disable this and terminate the job with failure instead. This can be useful if it is desired to run a new full level job anyway.

Requirements

The plugin is based on the VMware vSphere Storage APIs for Data Protection, which require at least a VMware vSphere Essentials license. It is tested against the VMware vSphere Storage APIs for Data Protection of VMware 7.0.1. It does not work with standalone unlicensed VMware ESXi™.

Since Bareos Version >= 22.0.0 the plugin uses the Virtual Disk Development Kit (VDDK) 8.0.0. According to the VDDK 8.0 release notes, it should be compatible with vSphere 8 and the next major release (except new features) and backward compatible with vSphere 6.7 and 7; see the VDDK release notes at https://developer.broadcom.com/sdks/vmware-virtual-disk-development-kit-vddk/8.0 for details.

This plugin requires the pyVmomi module version 7.0.2 or greater. Since Bareos Version >= 21.0.0 the package bareos-vmware-plugin no longer includes a dependency on a pyVmomi package, because some Linux distributions don’t provide current versions. Consequently, pyVmomi must either be installed by using pip install pyvmomi or by manually installing a distribution-provided pyVmomi package.

Since Version >= 23.0.3 the plugin requires the modules requests and urllib3 for backup and restore of the NVRAM. The package does not declare a dependency on them. In most Linux distributions, the packages providing these modules can be used. The minimum version of requests must be 2.20.0 and urllib3 must be at least 1.24.1.
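
If the distribution packages are unavailable or too old, the modules can also be installed with pip; a sketch using the minimum versions stated above:

Example installing requests and urllib3 with pip
pip install 'requests>=2.20.0' 'urllib3>=1.24.1'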

Installation

Install the package bareos-vmware-plugin including its requirements by using an appropriate package management tool (e.g. yum, zypper, apt).

Configuration

First add a user account in vCenter that has full privileges by assigning the account to an administrator role or by adding the account to a group that is assigned to an administrator role. While any user account with full privileges could be used, it is better practice to create a separate user account, so that the actions logged in vSphere for this account are clearly distinguishable. In the future, a more detailed set of required role privileges may be defined.

When using the vCenter appliance with embedded SSO, a user account usually has the structure <username>@vsphere.local, it may be different when using Active Directory as SSO in vCenter. For the examples here, we will use bakadm@vsphere.local with the password Bak.Adm-1234.

For more details regarding users and permissions see the vSphere documentation at https://docs.vmware.com

Make sure to add or enable the following settings in your Bareos File Daemon configuration:

bareos-fd.d/client/myself.conf
Client {
  ...
  Plugin Directory = /usr/lib/bareos/plugins
  Plugin Names = python3
  ...
}

Note: Depending on the platform, the Plugin Directory may also be /usr/lib64/bareos/plugins

To define the backup of a VM in Bareos, a job definition and a fileset resource must be added to the Bareos director configuration. In vCenter, VMs are usually organized in datacenters and folders. The following example shows how to configure the backup of the VM named websrv1 in the datacenter mydc1 folder webservers on the vCenter server vcenter.example.org:

bareos-dir.conf: VMware Plugin Job and FileSet definition
Job {
  Name = "vm-websrv1"
  JobDefs = "DefaultJob"
  FileSet = "vm-websrv1_fileset"
}

FileSet {
  Name = "vm-websrv1_fileset"

  Include {
    Options {
         Signature = XXH128
         Compression = LZ4
    }
    Plugin = "python"
             ":module_name=bareos-fd-vmware"
             ":dc=mydc1:folder=/webservers"
             ":vmname=websrv1"
             ":vcserver=vcenter.example.org"
             ":vcuser=bakadm@vsphere.local"
             ":vcpass=Bak.Adm-1234"
  }
}

For VMs defined in the root-folder, folder=/ must be specified in the Plugin definition.

Since Bareos Version >= 17.2.4 the module_path is without vmware_plugin directory. On upgrades you either adapt your configuration from

python:module_path for Bareos < 17.2.0
Plugin = "python"
         ":module_path=/usr/lib64/bareos/plugins/vmware_plugin"
         ":module_name=bareos-fd-vmware"
         ":..."

to

python:module_path for Bareos >= 17.2.0
Plugin = "python"
         ":module_path=/usr/lib64/bareos/plugins"
         ":module_name=bareos-fd-vmware"
         ":..."

or install the bareos-vmware-plugin-compat package which includes compatibility symbolic links.

Since Version >= 17.2.4: The plugin uses the Virtual Disk Development Kit (VDDK), which since version 6.5 requires passing the thumbprint of the vCenter SSL certificate. The thumbprint is the SHA1 checksum of the SSL certificate. The thumbprint can be retrieved like this:

Example Retrieving vCenter SSL Certificate Thumbprint
echo -n | openssl s_client -connect vcenter.example.org:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1  | tr -d ":"

The result would look like this:

Example Result Thumbprint
SHA1 Fingerprint=AABBCCDDEEFF11223344556677889900AABBCCDD
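
To obtain the bare value expected by the vcthumbprint option (without the SHA1 Fingerprint= prefix), the command can be extended with standard tools, for example:

Example Retrieving the bare Thumbprint value
echo -n | openssl s_client -connect vcenter.example.org:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1 | cut -d '=' -f 2 | tr -d ':'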

For additional security, there is now a plugin option vcthumbprint that can optionally be added. It must be given without colons, as in the following example:

bareos-dir.conf: VMware Plugin Options with vcthumbprint
    ...
    Plugin = "python"
             ":module_name=bareos-fd-vmware"
             ":dc=mydc1:folder=/webservers"
             ":vmname=websrv1"
             ":vcserver=vcenter.example.org"
             ":vcuser=bakadm@vsphere.local"
             ":vcpass=Bak.Adm-1234"
             ":vcthumbprint=AABBCCDDEEFF11223344556677889900AABBCCDD"
    ...

If the vcthumbprint option is used and the thumbprint on the server changes, for example by renewing or replacing the SSL certificate without adapting the vcthumbprint parameter in the Bareos configuration, backup jobs will fail and the API will only return an “unknown” error. Since Version >= 22.1.0 the plugin will compare the configured thumbprint with the server thumbprint and emit an appropriate error message advising to update the vcthumbprint parameter.

For ease of use (but less secure), when vcthumbprint is not given, the plugin will retrieve the thumbprint automatically.

Also since Version >= 17.2.4, another optional plugin option has been added that can be used to try to force a given transport method. Normally, when no transport method is given, VDDK will negotiate the available transport methods and select the best one. For a description of transport methods, see

https://knowledge.broadcom.com/external/article?legacyId=2075984

When the plugin runs in a VMware virtual machine which has access to the datastore where the virtual disks to be backed up reside, VDDK will use the hotadd transport method. On a physical server without SAN access, it will use the NBD transport method; hotadd transport is not available in this case.

To try forcing a given transport method, the plugin option transport can be used, for example

bareos-dir.conf: VMware Plugin options with transport
    ...
    Plugin = "python"
             ":module_name=bareos-fd-vmware"
             ":dc=mydc1"
             ":folder=/webservers"
             ":vmname=websrv1"
             ":vcserver=vcenter.example.org"
             ":vcuser=bakadm@vsphere.local"
             ":vcpass=Bak.Adm-1234"
             ":transport=nbdssl"
    ...

Note that the backup will fail when specifying a transport method that is not available.

Since Version >= 17.2.8 it is possible to use non-ASCII characters and blanks in the configuration for folder and vmname. Virtual disk file names or paths containing non-ASCII characters are also handled correctly now. For backing up VMs that are contained in vApps, it is now possible to use the vApp name like a folder component. For example, if we have the vApp named Test vApp in the folder /Test/Test Folder and the vApp contains the two VMs Test VM 01 and Test VM 02, then the configuration of the filesets should look like this:

bareos-dir.conf: VMware Plugin FileSet definition for vApp
FileSet {
  Name = "vApp_Test_vm_Test_VM_01_fileset"

  Include {
    Options {
         Signature = XXH128
         Compression = LZ4
    }
    Plugin = "python"
             ":module_name=bareos-fd-vmware"
             ":dc=mydc1"
             ":folder=/Test/Test Folder/Test vApp"
             ":vmname=Test VM 01"
             ":vcserver=vcenter.example.org"
             ":vcuser=bakadm@vsphere.local"
             ":vcpass=Bak.Adm-1234"
  }
}

FileSet {
  Name = "vApp_Test_vm_Test_VM_02_fileset"

  Include {
    Options {
         Signature = XXH128
         Compression = LZ4
    }
    Plugin = "python"
             ":module_name=bareos-fd-vmware"
             ":dc=mydc1"
             ":folder=/Test/Test Folder/Test vApp"
             ":vmname=Test VM 02"
             ":vcserver=vcenter.example.org"
             ":vcuser=bakadm@vsphere.local"
             ":vcpass=Bak.Adm-1234"
  }
}

However, it is important to know that it is not possible to use non-ASCII characters as an argument for the Name of a job or fileset resource.

Since Version >= 20 it is optionally possible to use a configuration file on the system running the Bareos File Daemon. This can be useful to specify common plugin options instead of having to repeat them in every FileSet. Options which are specified in the config file will override options from the FileSet, if the same option is given there, too. A warning will be issued in that case. Use the plugin option config_file to specify the config file name as in the following example:

bareos-dir.conf: VMware Plugin Job and FileSet definition with config_file
FileSet {
  Name = "vm-websrv1_fileset"

  Include {
    Options {
         Signature = XXH128
         Compression = LZ4
    }
    Plugin = "python"
             ":module_name=bareos-fd-vmware"
             ":dc=mydc1"
             ":folder=/webservers"
             ":vmname=websrv1"
             ":config_file=/etc/bareos/vmware-plugin.ini"
  }
}

And the config file as follows:

/etc/bareos/vmware-plugin.ini
[vmware_plugin_options]
vcserver=vcenter.example.org
vcuser=bakadm@vsphere.local
vcpass=Bak.Adm-1234

Note

Do not use quotes in the above config file. It is processed by the Python ConfigParser module, and the quotes would not be stripped from the string.
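
This behaviour can be verified with a few lines of Python (a standalone sketch, independent of Bareos):

Example: ConfigParser keeps quotes
# Demonstrates that ConfigParser keeps surrounding quotes as part of the value.
import configparser

cp = configparser.ConfigParser()
cp.read_string('[vmware_plugin_options]\nvcpass = "Bak.Adm-1234"\n')
# Prints "Bak.Adm-1234" including the quotes, so the password would be wrong.
print(cp["vmware_plugin_options"]["vcpass"])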

Since Version >= 20: To allow backing up VMs which do not support quiesced snapshots, it is now possible to use the plugin option quiesce. When not explicitly using this option, quiescing is enabled by default to create backups that are as consistent as possible. When setting quiesce=no it is more likely to back up an inconsistent state. In this case, the backup job log will contain an appropriate warning and the job termination will be Backup OK – with warnings.

Quiescing on Windows VMs also triggers a VSS snapshot when VMware Tools are installed. If that fails, for example on MS SQL Server, it can help to disable VSS application quiescing by adding this to the VMware Tools configuration:

[vmbackup]
vss.disableAppQuiescing = true

For details see https://knowledge.broadcom.com/external/article?legacyId=2146204

For consistent backups of MS SQL server, please use the Bareos MSSQL Plugin.

Backup

Before running the first backup, CBT (Changed Block Tracking) must be enabled for the VMs to be backed up.

Since Version >= 22.1.1 the plugin will try to enable CBT automatically when the plugin option enable_cbt=yes is set (see below). Since Version >= 23.0.0 this option is set to yes by default.

According to https://knowledge.broadcom.com/external/article?legacyId=2075984, manually enabling CBT is currently not working properly; the API, however, works properly. To enable CBT use the script vmware_cbt_tool.py, which is packaged in the bareos-vmware-plugin package:

usage of vmware_cbt_tool.py
user@host:~$ vmware_cbt_tool.py --help
usage: vmware_cbt_tool.py [-h] -s HOST [-o PORT] -u USER [-p PASSWORD] -d
                          DATACENTER [-f FOLDER] [-v VMNAME]
                          [--vm-uuid VM_UUID] [--enablecbt] [--disablecbt]
                          [--resetcbt] [--info] [--listall] [--sslverify]
                          [--dumpvmconfig]

Process args for enabling/disabling/resetting CBT

optional arguments:
  -h, --help            show this help message and exit
  -s HOST, --host HOST  Remote host to connect to
  -o PORT, --port PORT  Port to connect on
  -u USER, --user USER  User name to use when connecting to host
  -p PASSWORD, --password PASSWORD
                        Password to use when connecting to host
  -d DATACENTER, --datacenter DATACENTER
                        DataCenter Name
  -f FOLDER, --folder FOLDER
                        Folder Name (must start with /, use / for root folder
  -v VMNAME, --vmname VMNAME
                        Names of the Virtual Machines
  --vm-uuid VM_UUID     Instance UUIDs of the Virtual Machines
  --enablecbt           Enable CBT
  --disablecbt          Disable CBT
  --resetcbt            Reset CBT (disable, then enable)
  --info                Show information (CBT supported and enabled or
                        disabled)
  --listall             List all VMs in the given datacenter with UUID and
                        containing folder
  --sslverify           Force SSL certificate verification
  --dumpvmconfig        Dump VM config metadata to JSON file

Note

The options --vm-uuid and --listall have been added in Version >= 17.2.8. The tool is now also able to process non-ASCII character arguments for the --folder and --vmname arguments, and vApp names can be used like folder name components.

With --listall all VMs in the given datacenter are reported in a tabular output including instance UUID and containing Folder/vApp name.

Without the option --sslverify, self-signed SSL certificates will also be accepted, but a warning message will be emitted in this case.

The option --dumpvmconfig is helpful to debug issues with the transformation of VM config metadata for recreating virtual machines. The JSON file will be written to the current working directory when this option is used.
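
For example, to list all VMs in the datacenter from the configuration example used in this section:

Example using vmware_cbt_tool.py with --listall
user@host:~$ vmware_cbt_tool.py -s vcenter.example.org -u bakadm@vsphere.local -p Bak.Adm-1234 -d mydc1 --listall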

For the above configuration example, the command to enable CBT would be

Example using vmware_cbt_tool.py
user@host:~$ vmware_cbt_tool.py -s vcenter.example.org -u bakadm@vsphere.local -p Bak.Adm-1234 -d mydc1 -f /webservers -v websrv1 --enablecbt

Note: CBT does not work if the virtual hardware version is 6 or earlier.

After enabling CBT, Backup Jobs can be run or scheduled as usual, for example in bconsole:

run job=vm-websrv1 level=Full

Restore

For restoring to the same still existing VM from which the backup has been taken, the VM must be powered off and no snapshot must exist. In bconsole use restore menu item 5, select the correct FileSet and enter mark *, then done. After the restore has finished, the VM will be set to its previous powerstate. So if it was powered on at backup time, it will be powered on after restore. This can be changed by using the plugin option restore_powerstate (see below).

Since Version >= 22.0.0 the plugin will recreate the VM if it does not exist. By passing plugin options, with this version it is also possible to recreate the VM in a different folder, datacenter, host, cluster, resource pool or datastore, see below for details. The MAC address and the UUID of the VM will be restored, too. Restoring to a different VM location, e.g. by passing a different folder, will create a new VM even if the VM which was backed up still exists. In this case, the new VM will get a newly generated MAC address and UUID.

Note

When restoring a VM to a different location while the backed up VM still exists and a static IP is configured within the VM: To avoid IP address conflicts, make sure to also add the plugin option restore_powerstate=off and disable or change the network adapter configuration of the VM before powering it on.

To restore to a different folder, datacenter, host, cluster, resource pool or datastore, the corresponding plugin options must be passed. All plugin options which have been effective at backup time will be passed on restore and each individual option can be overridden by passing an options string at restore time. For example, to restore to a different VM name and different datastore, pass the following plugin option string:

Example restore plugin options string
python:datastore=datastore2:vmname=testvm1restored

All other plugin options which are not passed explicitly on restore will be the same as at backup time.

Note that most plugin options are used for both backup and restore, but there are some which can only be used on restore. For example, to prevent the VM from being powered on automatically after restore even if it was powered on at backup time, use this plugin options string:

Example restore plugin options string with powerstate
python:restore_datastore=datastore2:vmname=testvm1restored:restore_powerstate=off

See below for a complete restore example and description of all plugin options.

Restore using Bareos WebUI

Since Version >= 22.0.0 it is possible to use the Bareos WebUI to restore VMware Plugin jobs.

When using the WebUI to restore a VMware Plugin job, it is important to set Merge all client file sets to no and Merge all jobs up to the last full backup together to yes. In the file selection all files must be selected. Restoring only selected virtual disks will probably not work and is currently unsupported. The Bareos WebUI will detect if a plugin based job is being restored and will then show an additional Plugin options field, where a plugin options string starting with python: as described above can be entered.

[Screenshot: Bareos WebUI restore dialog with the additional Plugin options field]

Restore to local VMDK File

Since Version >= 15.2.3 it is possible to restore to local VMDK files. That means, instead of directly restoring a disk that belongs to the VM, the restore creates VMDK disk image files on the filesystem of the system that runs the Bareos File Daemon. As the VM that the backup was taken from is not affected by this, it can remain switched on while restoring to local VMDK. Such a restored VMDK file can then be uploaded to a VMware vSphere datastore or accessed by tools like guestfish to extract single files.

For restoring to local VMDK, the plugin option localvmdk=yes must be passed. The following example shows how to perform such a restore using bconsole:

Example restore to local VMDK
*restore
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"

First you select one or more JobIds that contain files
to be restored. You will be presented several methods
of specifying the JobIds. Then you will be allowed to
select which files from those JobIds are to be restored.

To select the JobIds, you have the following choices:
     1: List last 20 Jobs run
     ...
     5: Select the most recent backup for a client
     ...
    13: Cancel
Select item:  (1-13): 5
Automatically selected Client: vmw5-bareos-centos6-64-devel-fd
The defined FileSet resources are:
     1: Catalog
     ...
     5: PyTestSetVmware-test02
     6: PyTestSetVmware-test03
     ...
Select FileSet resource (1-10): 5
+-------+-------+----------+---------------+---------------------+------------------+
| jobid | level | jobfiles | jobbytes      | starttime           | volumename       |
+-------+-------+----------+---------------+---------------------+------------------+
|   625 | F     |        4 | 4,733,002,754 | 2016-02-18 10:32:03 | Full-0067        |
...
You have selected the following JobIds: 625,626,631,632,635

Building directory tree for JobId(s) 625,626,631,632,635 ...
10 files inserted into the tree.

You are now entering file selection mode where you add (mark) and
remove (unmark) files to be restored. No files are initially added, unless
you used the "all" keyword on the command line.
Enter "done" to leave this mode.

cwd is: /
$ mark *
10 files marked.
$ done
Bootstrap records written to /var/lib/bareos/vmw5-bareos-centos6-64-devel-dir.restore.1.bsr

The job will require the following
   Volume(s)                 Storage(s)                SD Device(s)
===========================================================================

    Full-0001                 File                      FileStorage
    ...
    Incremental-0078          File                      FileStorage

Volumes marked with "*" are online.

10 files selected to be restored.

Using Catalog "MyCatalog"
Run Restore job
JobName:         RestoreFiles
Bootstrap:       /var/lib/bareos/vmw5-bareos-centos6-64-devel-dir.restore.1.bsr
Where:           /tmp/bareos-restores
Replace:         Always
FileSet:         Linux All
Backup Client:   vmw5-bareos-centos6-64-devel-fd
Restore Client:  vmw5-bareos-centos6-64-devel-fd
Format:          Native
Storage:         File
When:            2016-02-25 15:06:48
Catalog:         MyCatalog
Priority:        10
Plugin Options:  *None*
OK to run? (yes/mod/no): mod
Parameters to modify:
     1: Level
     ...
    14: Plugin Options
Select parameter to modify (1-14): 14
Please enter Plugin Options string: python:localvmdk=yes
Run Restore job
JobName:         RestoreFiles
Bootstrap:       /var/lib/bareos/vmw5-bareos-centos6-64-devel-dir.restore.1.bsr
Where:           /tmp/bareos-restores
Replace:         Always
FileSet:         Linux All
Backup Client:   vmw5-bareos-centos6-64-devel-fd
Restore Client:  vmw5-bareos-centos6-64-devel-fd
Format:          Native
Storage:         File
When:            2016-02-25 15:06:48
Catalog:         MyCatalog
Priority:        10
Plugin Options:  python: module_path=/usr/lib64/bareos/plugins:module_name=bareos-fd-vmware: dc=dass5:folder=/: vmname=stephand-test02: vcserver=virtualcenter5.dass-it:vcuser=bakadm@vsphere.local: vcpass=Bak.Adm-1234: localvmdk=yes
OK to run? (yes/mod/no): yes
Job queued. JobId=639

Note: Since Bareos Version >= 15.2.3 it is sufficient to add only the additional Python plugin options, e.g.

python:localvmdk=yes

Before, all Python plugin options had to be repeated and the additional ones added, like:

"python:module_name=bareos-fd-vmware:dc=dass5:folder=/:vmname=stephand-test02:vcserver=virtualcenter5.dass-it:vcuser=bakadm@vsphere.local:vcpass=Bak.Adm-1234:localvmdk=yes"

After the restore process has finished, the restored VMDK files can be found under /tmp/bareos-restores/:

Example result of restore to local VMDK
$ ls -laR /tmp/bareos-restores
/tmp/bareos-restores:
total 28
drwxr-x--x.  3 root root  4096 Feb 25 15:47 .
drwxrwxrwt. 17 root root 20480 Feb 25 15:44 ..
drwxr-xr-x.  2 root root  4096 Feb 25 15:19 [ESX5-PS100] stephand-test02

$ ls -la "/tmp/bareos-restores/[ESX5-PS100] stephand-test02"
/tmp/bareos-restores/[ESX5-PS100] stephand-test02:
total 7898292
drwxr-xr-x. 2 root root       4096 Feb 25 15:19 .
drwxr-x--x. 3 root root       4096 Feb 25 15:47 ..
-rw-------. 1 root root 2075197440 Feb 25 15:19 stephand-test02_1.vmdk
-rw-------. 1 root root 6012731392 Feb 25 15:19 stephand-test02.vmdk
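
As mentioned above, tools like guestfish (from libguestfs) can then be used to extract single files from such an image. A hedged sketch, assuming the guest filesystems can be auto-detected and mounted with -i:

Example inspecting a restored VMDK with guestfish
$ guestfish --ro -a "/tmp/bareos-restores/[ESX5-PS100] stephand-test02/stephand-test02.vmdk" -i ls /etc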

Description of all Plugin Options

Note that all plugin options that have been used at backup time are passed on restore. The VM metadata is saved in restore objects, both in the catalog DB and on the volume; it is used on restore if the VM must be recreated. Most options are used for both backup and restore; the options which can only be used for restore start with restore_. Where nothing special is mentioned regarding restore, it is normally not necessary or useful to override that option on restore.

vcserver (mandatory on backup)

Hostname (FQDN) or IP address of vCenter server. Restore to different vCenter Server is unsupported.

vcuser (mandatory on backup)

Username for API access to vCenter, e.g. administrator@vsphere.local

vcpass (mandatory on backup)

Password for API access to vCenter

dc (mandatory on backup)

Datacenter name. This can be optionally passed on restore to recreate the VM in a different datacenter.

folder (mandatory on backup)

The VM folder in which the VM to be backed up resides. This must be given like a UNIX path with / as separator. On restore, if a different folder is given, the VM will be recreated in that folder. The given folder must exist before starting the restore.

vmname (mandatory on backup)

The name of the VM to be backed up. On restore it is possible to override this option in order to recreate the VM with a different name.

vcthumbprint (optional)

Thumbprint of the vCenter SSL Certificate, which is the SHA1 checksum of the SSL Certificate

transport (optional)

Normally the transport mode will be autonegotiated: e.g. if the system that runs this plugin is a VM that has storage access to the datastore of the VM that’s being backed up, the hotadd transport will be used; otherwise the nbd or nbdssl transport. For details about transport modes see the VDDK documentation. This option can be used to force the given transport mode.

log_path (optional)

The default log path is /var/log/bareos/. A different path can be specified using this option, it will be used for bareos_vadp_dumper log files.

localvmdk (optional)

Restore to local .vmdk file(s) instead of restore to VM. Default is no.

vadp_dumper_verbose (optional)

When setting vadp_dumper_verbose=yes, the option -v will be added when running bareos_vadp_dumper. This can be helpful for debugging purposes.

verifyssl (optional)

By default the validity of SSL certificates will be verified. By setting verifyssl=no this can be disabled.

quiesce (optional)

By default, the backed up VM will be triggered to quiesce its filesystems before creating a snapshot, to increase data consistency. This can fail or take very long on a VM which runs a heavy I/O workload. When setting quiesce=no the quiescing will be skipped, but the snapshot may be inconsistent. It is not recommended to use this option; instead, try to stop heavy I/O load before the snapshot. This could be possible by running pre-freeze and post-thaw actions, which can be configured in VMware Tools; see the VMware documentation for details.

cleanup_tmpfiles (optional)

By default, temporary files created by the plugin will be cleaned up after backup or restore. When setting cleanup_tmpfiles=no they will be left over; this can be helpful for debugging purposes. Since Version >= 22.0.0

restore_esxhost (optional)

By default, if a VM to be restored does not exist, it will be recreated on the same host that it has been running on at backup time. Use this option to restore on the given ESX host. Since Version >= 22.0.0

restore_cluster (optional)

Instead of specifying restore_esxhost, it is also possible to specify a cluster name using this option, the ESX host will be autoselected in that case, if DRS is configured properly. Since Version >= 22.0.0

restore_datastore (optional)

By default, if a VM to be restored does not exist, it will be recreated in the same datastore where it was stored at backup time. Use this option to restore to the given datastore. Since Version >= 22.0.0. Since Version >= 22.1.0 it is possible to backup and restore VMs with disks on multiple datastores; when using this option, it will only change the datastore of the disks which were stored in the same datastore as the VM, the other disks will be recreated on the same datastore they were backed up from.

restore_resourcepool (optional)

By default, if a VM to be restored does not exist, it will be recreated in the same resource pool it was in at backup time. This option allows overriding that and specifying a different resource pool. Since Version >= 22.0.0

restore_powerstate (optional)

By default, after restore a VM will be set to its previous powerstate, which means the powerstate at backup time. When specifying restore_powerstate=off the VM will stay powered off after restore. It can also be forced to on with restore_powerstate=on. Note that this will only work if DRS is configured as fully automated; otherwise the API request to power on a VM will be ignored. Since Version >= 22.0.0

snapshot_retries (optional)

Number of retries when taking a snapshot fails (default: 3). The most common cause of snapshot failure is “error while quiescing the virtual machine”. In this case usually retrying helps. If not, also check if a pre-freeze script is used on the VM, as a non-zero exit code will cause a quiescing error. The pre-freeze and post-thaw scripts are executed by VMwareTools. Since Version >= 22.1.0

snapshot_retry_wait (optional)

Time in seconds to wait before the next snapshot retry (default: 5). Since Version >= 22.1.0
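
For example, to allow more retries with a longer wait between them, both options can be set in the FileSet Plugin string (the values are illustrative, not recommendations):

bareos-dir.conf: VMware Plugin options with snapshot retry tuning (sketch)
Plugin = "python3"
         ":module_name=bareos-fd-vmware"
         ":snapshot_retries=5"
         ":snapshot_retry_wait=10"
         ":..."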

poweron_timeout (optional)

Timeout in seconds to wait for a VM to be powered on after restore, default 15s. When a VM is powered on after restore (see also the option restore_powerstate above), the plugin will check if it succeeded by checking the power state. If it is not powered on within this timeout, the restore job will issue a warning message.

enable_cbt (optional)

When using enable_cbt=yes, the plugin will enable CBT (changed block tracking) if possible and it is not yet enabled. It is required that no snapshot exists when enabling CBT, otherwise the plugin will emit an error message. By default this option is set to yes since Version >= 23.0.0 so that vmware_cbt_tool.py is no longer necessary to enable CBT. This option exists since Version >= 22.1.1

do_io_in_core (optional)

With the option do_io_in_core=yes, the data stream from the bareos_vadp_dumper will be processed directly by the Bareos core via file descriptor. When set to no, the data stream is read and written by the Python plugin code from the file descriptor and exchanged with the core over a buffer. Enabling this can improve the performance and reduce CPU consumption. See Python Plugin API for more details. By default this is set to yes. Since Version >= 23.0.0

vadp_dumper_multithreading (optional)

The option vadp_dumper_multithreading=yes enables multithreading when running bareos_vadp_dumper, so that it will run one reader and one writer thread. By default it is set to yes as nowadays CPUs usually have multiple cores, so this improves the performance in most cases. Since Version >= 23.0.0

vadp_dumper_sectors_per_call (optional)

This option can be used to optimize the performance. The default value is 16384; this is the smallest value that achieved the maximum throughput in our benchmark tests. Together with vadp_dumper_multithreading=yes this setting can improve the backup performance significantly. Since Version >= 23.0.0

vadp_dumper_query_allocated_blocks_chunk_size (optional)

The bareos_vadp_dumper uses a VDDK function to query the allocated blocks of virtual disks since Version >= 23.0.0. Especially for full backups, this normally leads to smaller and implicitly faster backups. This plugin option controls the chunk size that is passed to that function. The default value for the chunk size is 1024. Allowed values are powers of two between 128 and 131072, inclusive. In our benchmark tests with small VMs of 3GB size, this value did not have any performance impact. However, with more data it might have an impact on the backup performance.
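
For illustration, the performance related options described above could be set explicitly like this (the values shown are the documented defaults):

bareos-dir.conf: VMware Plugin performance options (sketch)
Plugin = "python3"
         ":module_name=bareos-fd-vmware"
         ":do_io_in_core=yes"
         ":vadp_dumper_multithreading=yes"
         ":vadp_dumper_sectors_per_call=16384"
         ":vadp_dumper_query_allocated_blocks_chunk_size=1024"
         ":..."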

fallback_to_full_cbt (optional)

In some situations requesting the CBT information for an incremental backup can fail, for example when the CBT information had to be reset. In that case, by default the plugin will fall back to requesting full level CBT information, which leads to a successful incremental backup job, but it will have the size of a full level backup job. As a consequence, restore time would increase, so a warning will be emitted in the job log, recommending a new full level backup. By setting fallback_to_full_cbt=no, the job will not request full level CBT and will terminate immediately with failure instead. This can be used if it is desired to run a subsequent new full level backup. Note that this does not happen automatically; a new full level job must be run manually afterwards, but this can be automated by using a post backup script, for details see https://github.com/bareos/bareos/tree/master/contrib/misc/reschedule_job_as_full

Since Version >= 23.0.3

restore_allow_disks_mismatch (optional)

When using VSAN, restoring with recreating the VM can fail because the plugin detects a disk mismatch: with VSAN, recreated disks get a generated backing disk path. When passing the plugin option restore_allow_disks_mismatch=yes, the disk match check will allow a mismatch and continue the restore. This option is only used when recreating the VM to be restored.

Since Version >= 23.0.4

uuid (deprecated)

The uuid option could be used instead of dc, folder and vmname to uniquely address a VM for backup. As the plugin since Version >= 22.0.0 is able to recreate VMs in a different datacenter, folder or datastore, this option has become obsolete. When using uuid, restoring is only possible to the same still existing VM. It is recommended to change the configuration, as the uuid option will be dropped in the next version.

Grpc Plugin

The grpc plugin allows you to run a separate executable as a Bareos plugin. This executable talks to the core via gRPC (https://grpc.io) remote procedure calls.

This has multiple upsides for users, such as:

  • a crash inside a plugin will not also crash the daemon, and

  • some classes of concurrency problems related to the use of global state inside plugins are eliminated.

This plugin on its own is not very useful. It is only a bridge between the Bareos core and the actual plugin doing the work.

The plugin comes with two executables, grpc-test-module and grpc-python-module, which allow you to make use of this bridge. As the name suggests, grpc-test-module is a simple module that can be used to test that the bridge is working. grpc-python-module on the other hand is a handy little executable that can be used to load and run normal bareos plugins (including python plugins) in a separate process.

Status of the Grpc Plugin

This plugin is still in an experimental phase. The API between core and plugins may change at any time.

Installation of the Grpc Plugin

The grpc plugin, together with the grpc-test-module and grpc-python-module, can be installed with the bareos-filedaemon-grpc-plugin package.

Configuration of the Grpc Plugin

The Grpc Plugin receives the name of the executable that it should execute as its first argument. This executable is assumed to be in the normal Bareos plugin directory.
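
For example, to run the bundled test module through the bridge, a minimal FileSet entry could look like this (a sketch; arguments after the executable name are passed on to it):

bareos-dir.d/fileset/GrpcTest.conf (sketch)
FileSet {
  Name = "GrpcTest"
  Include {
    Options {
      Signature = XXH128
    }
    Plugin = "grpc-fd"
             ":grpc-test-module"
  }
}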

Grpc Python Plugin

The Grpc Python Plugin is a Grpc Plugin that can be used to start the python-fd Plugin (and other Bareos plugins) in a separate process. This ensures that no Python state is shared between different jobs.

This plugin does not take any options, but expects its arguments to be a valid python-fd plugin definition.

Example

/etc/bareos/bareos-dir.d/fileset/GrpcPython.conf
FileSet {
  Name = "GrpcPython"
  Description = "Run a python plugin in a separate process"
  Include {
    Options {
      Signature = XXH128
    }
    Plugin = "grpc-fd"
             ":grpc-python-module"
             ":python3"
             ":module_name=pyplug"
             ":arg1=val1"
  }
}

This fileset will call the python-fd plugin pyplug with the argument arg1=val1 in a separate process.