Director Configuration
Of all the configuration files needed to run Bareos, the Director’s is the most complicated and the one that you will need to modify the most often as you add clients or modify the FileSets.
For a general discussion of configuration files and resources, including the recognized data types, see Customizing the Configuration.
Everything revolves around a job and is tied to a job in one way or another.
The Bareos Director knows about the following resource types:
Director Resource – to define the Director’s name and its access password used for authenticating the Console program. Only a single Director resource definition may appear in the Director’s configuration file.
Job Resource – to define the backup/restore Jobs and to tie together the Client, FileSet and Schedule resources to be used for each Job. Normally, you will define Jobs with different names corresponding to each client (i.e. one Job per client, each with a different name).
JobDefs Resource – optional resource for providing defaults for Job resources.
Schedule Resource – to define when a Job has to run. You may have any number of Schedules, but each job will reference only one.
FileSet Resource – to define the set of files to be backed up for each Client. You may have any number of FileSets but each Job will reference only one.
Client Resource – to define what Client is to be backed up. You will generally have multiple Client definitions. Each Job will reference only a single client.
Storage Resource – to define on what physical device the Volumes should be mounted. You may have one or more Storage definitions.
Pool Resource – to define the pool of Volumes that can be used for a particular Job. Most people use a single default Pool. However, if you have a large number of clients or volumes, you may want to have multiple Pools. Pools allow you to restrict a Job (or a Client) to use only a particular set of Volumes.
Catalog Resource – to define in what database to keep the list of files and the Volume names where they are backed up. Most people only use a single catalog. It is possible, however not advised and not supported, to use multiple catalogs, see Multiple Catalogs.
Messages Resource – to define where error and information messages are to be sent or logged. You may define multiple different message resources and hence direct particular classes of messages to different users or locations (files, …).
Director Resource
The Director resource defines the attributes of the Directors running on the network. Only a single Director resource is allowed.
The following is an example of a valid Director resource definition:
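A minimal sketch of such a definition (name, password, paths and the Messages resource name are placeholders; adjust them to your installation):

    Director {
      Name = bareos-dir
      QueryFile = "/usr/lib/bareos/scripts/query.sql"
      Maximum Concurrent Jobs = 10
      Password = "secretpassword"   # console password
      Messages = Daemon
      Auditing = yes
    }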
The summary table of Director configuration directives (configuration directive name, type of data, default value, remark) is detailed in the directive descriptions below.
- Absolute Job Timeout
- Type:
- Since Version:
14.2.0
Absolute time after which a Job gets terminated regardless of its progress
- Audit Events
- Type:
- Since Version:
14.2.0
Specify which commands (see Console Commands) will be audited. If nothing is specified (and
Auditing (Dir->Director)
is enabled), all commands will be audited.
- Auditing
- Type:
- Default value:
no
- Since Version:
14.2.0
This directive allows you to enable or disable auditing of interactions with the Bareos Director. If enabled, audit messages will be generated. The messages resource configured in
Messages (Dir->Director)
defines how these messages are handled.
- Description
- Type:
The text field contains a description of the Director that will be displayed in the graphical user interface. This directive is optional.
- Dir Address
- Type:
- Default value:
9101
This directive is optional, but if it is specified, it will cause the Director server (for the Console program) to bind to the specified address. If this and the
Dir Addresses (Dir->Director)
directives are not specified, the Director will bind to both IPv6 and IPv4 default addresses (the default).
- Dir Addresses
- Type:
- Default value:
9101
Specify the ports and addresses on which the Director daemon will listen for Bareos Console connections.
Please note that if you use the
Dir Addresses (Dir->Director)
directive, you must not use either a Dir Port (Dir->Director)
or a Dir Address (Dir->Director)
directive in the same resource.
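A hedged sketch of the syntax for this directive (addresses, ports and host name are placeholders):

    Dir Addresses = {
      ip = { addr = 192.0.2.10; port = 9101 }
      ipv4 = { addr = 192.0.2.10 }
      ipv6 = {
        addr = 2001:db8::10
        port = 9101
      }
      ip = { addr = director.example.com }
    }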
- Dir Port
- Type:
- Default value:
9101
Specify the port on which the Director daemon will listen for Bareos Console connections. This same port number must be specified in the Director resource of the Console configuration file. This directive should not be used if you specify the
Dir Addresses (Dir->Director)
(N.B. plural) directive. By default, the Director will listen on both the IPv6 and IPv4 default addresses on the port you set. If you want to listen on either IPv4 or IPv6 only, you have to specify it with either
Dir Address (Dir->Director)
, or remove Dir Port (Dir->Director)
and just use Dir Addresses (Dir->Director)
instead.
- Dir Source Address
- Type:
- Default value:
0
This record is optional, and if it is specified, it will cause the Director server (when initiating connections to a storage or file daemon) to source its connections from the specified address. Only a single IP address may be specified. If this record is not specified, the Director server will source its outgoing connections according to the system routing table (the default).
- Enable kTLS
- Type:
- Default value:
no
If set to “yes”, Bareos will allow the SSL implementation to use Kernel TLS.
- FD Connect Timeout
- Type:
- Default value:
180
The time that the Director should continue attempting to contact the File Daemon to start a job; after this time, the Director will cancel the job.
- Heartbeat Interval
- Type:
- Default value:
0
Optional; if specified, it sets a keepalive interval (heartbeat) on the sockets used by the Bareos Director.
See details in Heartbeat Interval - TCP Keepalive.
- Key Encryption Key
- Type:
This key is used to encrypt the Security Key that is exchanged between the Director and the Storage Daemon for supporting Application Managed Encryption (AME). For security reasons each Director should have a different Key Encryption Key.
- Log Timestamp Format
- Type:
- Default value:
%d-%b %H:%M
- Since Version:
15.2.3
This parameter needs to be a valid strftime format string. See man 3 strftime for the full list of available substitution variables.
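For example, to get full ISO-style timestamps in the log (a sketch; any valid strftime string works):

    Log Timestamp Format = "%Y-%m-%d %H:%M:%S"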
- Maximum Concurrent Jobs
- Type:
- Default value:
1
This directive specifies the maximum number of total Director Jobs that should run concurrently.
See also the section about Concurrent Jobs.
- Maximum Console Connections
- Type:
- Default value:
20
This directive specifies the maximum number of Console Connections that could run concurrently.
- Messages
- Type:
The messages resource specifies where to deliver Director messages that are not associated with a specific Job. Most messages are specific to a job and will be directed to the Messages resource specified by the job. However, there are a few messages that can occur when no job is running.
- Name
- Required:
True
- Type:
The name of the resource.
The director name used by the system administrator.
- NDMP Log Level
- Type:
- Default value:
4
- Since Version:
13.2.0
This directive sets the loglevel for the NDMP protocol library.
- NDMP Namelist Fhinfo Set Zero For Invalid Uquad
- Type:
- Default value:
no
- Since Version:
20.0.6
This directive enables a bug workaround for Isilon 9.1.0.0 systems, where the NDMP namelist’s tape offset (also known as fhinfo) is sanity-checked, so that the valid value -1 is no longer accepted. The Isilon system sends the following error message: ‘Invalid nlist.tape_offset -1 at index 1 - tape offset not aligned at 512B boundary’. The workaround sends 0 instead of -1, which is accepted by the Isilon system and enables normal operation again.
- NDMP Snooping
- Type:
- Since Version:
13.2.0
This directive enables the Snooping and pretty printing of NDMP protocol information in debugging mode.
- Optimize For Size
- Type:
- Default value:
no
If set to yes, this directive will use the optimizations for memory size over speed. So it will try to use less memory, which may lead to a somewhat lower speed. It is currently mostly used for keeping all hard links in memory.
If neither
Optimize For Size (Dir->Director)
nor Optimize For Speed (Dir->Director)
is enabled, Optimize For Size (Dir->Director)
is enabled by default.
- Optimize For Speed
- Type:
- Default value:
no
If set to yes, this directive will use the optimizations for speed over memory size. So it will try to use more memory, which leads to a somewhat higher speed. It is currently mostly used for keeping all hard links in memory. It relates to the
Optimize For Size (Dir->Director)
option; set only one of them to yes, as they are mutually exclusive.
- Password
- Required:
True
- Type:
Specifies the password that must be supplied for the default Bareos Console to be authorized. This password corresponds to
Password (Console->Director)
of the Console configuration file. The password is plain text.
- Plugin Directory
- Type:
- Since Version:
14.2.0
Plugins are loaded from this directory. To load only specific plugins, use ‘Plugin Names’.
- Plugin Names
- Type:
- Since Version:
14.2.0
List of plugins that should be loaded from ‘Plugin Directory’ (only basenames; ‘-dir.so’ is added automatically). If empty, all plugins will be loaded.
- Query File
- Required:
True
- Type:
This directive is required and specifies a directory and file in which the Director can find the canned SQL statements for the query command.
- SD Connect Timeout
- Type:
- Default value:
1800
The time that the Director should continue attempting to contact the Storage Daemon to start a job; after this time, the Director will cancel the job.
- Secure Erase Command
- Type:
- Since Version:
15.2.1
Specify the command that will be called when Bareos unlinks files.
When files are no longer needed, Bareos will delete (unlink) them. With this directive, it will call the specified command to delete these files. See Secure Erase Command for details.
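A minimal sketch, assuming a wiping utility such as shred is available at the given path (path and option are placeholders):

    Secure Erase Command = "/usr/bin/shred -u"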
- Statistics Collect Interval
- Type:
- Default value:
0
- Since Version:
deprecated
Bareos offers the possibility to collect statistic information from its connected devices. To do so,
Collect Statistics (Dir->Storage)
must be enabled. This interval defines how often the Director connects to the attached Storage Daemons to collect the statistic information.
- Statistics Retention
- Type:
- Default value:
160704000
- Since Version:
deprecated
The Statistics Retention directive defines the length of time that Bareos will keep statistics job records in the Catalog database after the Job End time (in the catalog JobHisto table). When this time period expires, and if the user runs the prune stats command, Bareos will prune (remove) Job records that are older than the specified period.
These statistics records aren’t used for restore purposes, but mainly for capacity planning, billing, etc. See the chapter Job Statistics for additional information.
- Subscriptions
- Type:
- Default value:
0
- Since Version:
12.4.4
In case you want to check that the number of active clients doesn’t exceed a specific number, you can define this number here and check with the status subscriptions command.
However, this is only intended to give a hint. No active limiting is implemented.
- TLS Allowed CN
- Type:
“Common Name”s (CNs) of the allowed peer certificates.
- TLS Cipher Suites
- Type:
Colon separated list of valid TLSv1.3 Ciphers; see openssl ciphers -s -tls1_3. Leftmost element has the highest priority. Currently only SHA256 ciphers are supported.
- TLS DH File
- Type:
Path to PEM encoded Diffie-Hellman parameter file. If this directive is specified, DH key exchange will be used for the ephemeral keying, allowing for forward secrecy of communications.
- TLS Enable
- Type:
- Default value:
yes
Enable TLS support.
Bareos can be configured to encrypt all its network traffic. See the chapter TLS Configuration Directives to see how the Bareos Director (and the other components) must be configured to use TLS.
- TLS Key
- Type:
Path of a PEM encoded private key. It must correspond to the specified “TLS Certificate”.
- TLS Require
- Type:
- Default value:
yes
If set to “no”, Bareos can fall back to use unencrypted connections.
- TLS Verify Peer
- Type:
- Default value:
no
If disabled, all certificates signed by a known CA will be accepted. If enabled, the CN of a certificate must match the Address or be listed in the “TLS Allowed CN” list.
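A hedged sketch of how the TLS directives described above might be combined in the Director resource (certificate, key and CN values are placeholders; the CA certificate directive is assumed from the standard TLS directive set and is not described in this excerpt):

    Director {
      Name = bareos-dir
      # ...
      TLS Enable = yes
      TLS Require = yes
      TLS Verify Peer = yes
      TLS Allowed CN = "console.example.com"
      TLS CA Certificate File = /etc/bareos/tls/ca.crt
      TLS Certificate = /etc/bareos/tls/bareos-dir.crt
      TLS Key = /etc/bareos/tls/bareos-dir.key
    }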
- Ver Id
- Type:
where string is an identifier which can be used for support purposes. This string is displayed using the version command.
- Working Directory
- Type:
- Default value:
/var/lib/bareos (platform specific)
This directive is optional and specifies a directory in which the Director may put its status files. This directory should be used only by Bareos but may be shared by other Bareos daemons. Standard shell expansion of the directory is done when the configuration file is read so that values such as
$HOME
will be properly expanded.
The working directory specified must already exist and be readable and writable by the Bareos daemon referencing it.
Job Resource
The Job resource defines a Job (Backup, Restore, …) that Bareos must perform. Each Job resource definition contains the name of a Client and a FileSet to backup, the Schedule for the Job, where the data are to be stored, and what media Pool can be used. In effect, each Job resource must specify What, Where, How, and When or FileSet, Storage, Backup/Restore/Level, and Schedule respectively. Note, the FileSet must be specified for a restore job for historical reasons, but it is no longer used.
Only a single type (Backup, Restore, …) can be specified for any job. If you want to backup multiple FileSets on the same Client or multiple Clients, you must define a Job for each one.
Note, you define only a single Job to do the Full, Differential, and Incremental backups since the different backup levels are tied together by a unique Job name. Normally, you will have only one Job per Client, but if a client has a really huge number of files (more than several million), you might want to split it into several Jobs each with a different FileSet covering only parts of the total files.
Multiple Storage daemons are not currently supported for Jobs. If you do want to use multiple storage daemons, you will need to create a different Job and ensure that the combination of Client and FileSet is unique.
Warning
Bareos uses only Client (Dir->Job)
and File Set (Dir->Job)
to determine which jobids belong together.
If job A and B have the same client and fileset defined, the resulting jobids will be intermixed as follows:
When a job determines its predecessor to determine its required level and since-time, it will consider all jobs with the same client and fileset.
When restoring a client you select the fileset and all jobs that used that fileset will be considered.
As a matter of fact, if you want separate backups, you have to duplicate your filesets with a different name and the same content.
The summary table of Job configuration directives (configuration directive name, type of data, default value, remark) is detailed in the directive descriptions below.
- Accurate
- Type:
- Default value:
no
In accurate mode, the File daemon knows exactly which files were present after the last backup, so it is able to handle deleted or renamed files.
When restoring a FileSet for a specified date (including “most recent”), Bareos is able to restore exactly the files and directories that existed at the time of the last backup prior to that date including ensuring that deleted files are actually deleted, and renamed directories are restored properly.
When doing VirtualFull backups, it is advised to use the accurate mode, otherwise the VirtualFull might contain already deleted files.
However, using the accurate mode has also disadvantages:
The File daemon must keep data concerning all files in memory. So if you do not have sufficient memory, the backup may either be terribly slow or fail. For 500,000 files (a typical desktop Linux system), it will require approximately 64 megabytes of RAM on your File daemon to hold the required information.
- Add Prefix
- Type:
This directive applies only to a Restore job and specifies a prefix to the directory name of all files being restored. This will use File Relocation feature.
- Add Suffix
- Type:
This directive applies only to a Restore job and specifies a suffix to all files being restored. This will use File Relocation feature.
Using Add Suffix = .old, /etc/passwd will be restored to /etc/passwd.old.
- Allow Duplicate Jobs
- Type:
- Default value:
yes
A duplicate job in the sense we use it here means a second or subsequent job with the same name starts. This happens most frequently when the first job runs longer than expected because no tapes are available.
If this directive is enabled, duplicate jobs will be run. If the directive is set to no, then only one job of a given name may run at one time. The action that Bareos takes to ensure only one job runs is determined by the Cancel Lower Level Duplicates (Dir->Job), Cancel Queued Duplicates (Dir->Job) and Cancel Running Duplicates (Dir->Job) directives.
If none of these directives is set to yes, Allow Duplicate Jobs is set to no and two jobs are present, then the current job (the second one started) will be cancelled.
Virtual backup jobs of a consolidation are not affected by the directive. In those cases the directive is going to be ignored.
- Allow Mixed Priority
- Type:
- Default value:
no
When set to yes, this job may run even if lower priority jobs are already running. This means a high priority job will not have to wait for other jobs to finish before starting. The scheduler will only mix priorities when all running jobs have this set to true.
Note that only higher priority jobs will start early. Suppose the director will allow two concurrent jobs, and that two jobs with priority 10 are running, with two more in the queue. If a job with priority 5 is added to the queue, it will be run as soon as one of the running jobs finishes. However, new priority 10 jobs will not be run until the priority 5 job has finished.
- Always Incremental
- Type:
- Default value:
no
- Since Version:
16.2.4
Enable/disable always incremental backup scheme.
- Always Incremental Job Retention
- Type:
- Default value:
0
- Since Version:
16.2.4
Backup Jobs older than the specified time duration will be merged into a new Virtual backup.
- Always Incremental Keep Number
- Type:
- Default value:
0
- Since Version:
16.2.4
Guarantee that at least the specified number of Backup Jobs will persist, even if they are older than “Always Incremental Job Retention”.
- Always Incremental Max Full Age
- Type:
- Since Version:
16.2.4
If “AlwaysIncrementalMaxFullAge” is set, during consolidations only incremental backups will be considered while the Full Backup is left untouched, to reduce the amount of data being consolidated. Only if the Full Backup is older than “AlwaysIncrementalMaxFullAge” will it be part of the consolidation, to avoid the Full Backup becoming too old.
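A hedged sketch of a backup Job using the always incremental directives described above (names and durations are placeholders; further required Job directives are omitted):

    Job {
      Name = "backup-client1"
      Client = client1-fd
      FileSet = "LinuxAll"
      Accurate = yes
      Always Incremental = yes
      Always Incremental Job Retention = 7 days
      Always Incremental Keep Number = 7
      Always Incremental Max Full Age = 14 days
      # Pool, Storage, Schedule, Messages, ... omitted
    }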
- Backup Format
- Type:
- Default value:
Native
The backup format used for protocols which support multiple formats. By default, it uses the Bareos Native Backup format. Other protocols, like NDMP, support different backup formats, for instance:
Dump
Tar
SMTape
- Base
- Type:
- Since Version:
deprecated
The Base directive permits you to specify the list of jobs that will be used as a base during a Full backup. This directive is optional. See the Base Job chapter for more information.
- Bootstrap
- Type:
The Bootstrap directive specifies a bootstrap file that, if provided, will be used during Restore Jobs and is ignored in other Job types. The bootstrap file contains the list of tapes to be used in a restore Job as well as which files are to be restored. Specification of this directive is optional, and if specified, it is used only for a restore job. In addition, when running a Restore job from the console, this value can be changed.
If you use the restore command in the Console program, to start a restore job, the bootstrap file will be created automatically from the files you select to be restored.
For additional details see The Bootstrap File chapter.
- Cancel Lower Level Duplicates
- Type:
- Default value:
no
If
Allow Duplicate Jobs (Dir->Job)
is set to no and this directive is set to yes, Bareos will choose between duplicated jobs the one with the highest level. For example, it will cancel a previous Incremental to run a Full backup. It works only for Backup jobs. If the levels of the duplicated jobs are the same, nothing is done and the directives Cancel Queued Duplicates (Dir->Job)
and Cancel Running Duplicates (Dir->Job)
will be examined.
- Cancel Queued Duplicates
- Type:
- Default value:
no
If
Allow Duplicate Jobs (Dir->Job)
is set to no and if this directive is set to yes any job that is already queued to run but not yet running will be canceled.
- Cancel Running Duplicates
- Type:
- Default value:
no
If
Allow Duplicate Jobs (Dir->Job)
is set to no and if this directive is set to yes any job that is already running will be canceled.
- Catalog
- Type:
- Since Version:
13.4.0
This specifies the name of the catalog resource to be used for this Job. When a catalog is defined in a Job it will override the definition in the client.
- Client
- Type:
The Client directive specifies the Client (File daemon) that will be used in the current Job. Only a single Client may be specified in any one Job. The Client runs on the machine to be backed up, and sends the requested files to the Storage daemon for backup, or receives them when restoring. For additional details, see the Client Resource of this chapter. For versions before 13.3.0, this directive is required for all Job types. For version >= 13.3.0, it is required for all Job types except Copy and Migrate jobs.
- Client Run After Job
- Type:
This is a shortcut for the
Run Script (Dir->Job)
resource that runs a command on the client after a backup job.
- Client Run Before Job
- Type:
This is basically a shortcut for the
Run Script (Dir->Job)
resource that runs a command on the client before a backup job.
Warning
For compatibility reasons, with this shortcut, the command is executed directly when the client receives it. If the command fails, other remote runscripts will be discarded. To be sure that all commands will be sent and executed, you have to use the
Run Script (Dir->Job)
syntax.
- Differential Backup Pool
- Type:
The Differential Backup Pool specifies a Pool to be used for Differential backups. It will override any
Pool (Dir->Job)
specification during a Differential backup.
- Differential Max Runtime
- Type:
The time specifies the maximum allowed time that a Differential backup job may run, counted from when the job starts (not necessarily the same as when the job was scheduled).
- Dir Plugin Options
- Type:
These settings are plugin specific, see Director Plugins.
- Enabled
- Type:
- Default value:
yes
En- or disable this resource.
This directive allows you to enable or disable automatic execution via the scheduler of a Job.
- FD Plugin Options
- Type:
These settings are plugin specific, see File Daemon Plugins.
- File History Size
- Type:
- Default value:
10000000
- Since Version:
15.2.4
When using NDMP and
Save File History (Dir->Job)
is enabled, this directive controls the size of the internal temporary database (LMDB) used to translate NDMP file and directory information into Bareos file and directory information. File History Size must be greater than the number of directories + files of this NDMP backup job.
Warning
This uses a large memory mapped file (File History Size * 256 => around 2.3 GB for File History Size = 10000000). On 32-bit systems or if a memory limit for the user running the Bareos Director (normally bareos) exists (verify by su - bareos -s /bin/sh -c "ulimit -a"), this may fail.
- File Set
- Type:
The FileSet directive specifies the FileSet that will be used in the current Job. The FileSet specifies which directories (or files) are to be backed up, and what options to use (e.g. compression, …). Only a single FileSet resource may be specified in any one Job. For additional details, see the FileSet Resource section of this chapter. This directive is required (For versions before 13.3.0 for all Jobtypes and for versions after that for all Jobtypes but Copy and Migrate).
- Full Backup Pool
- Type:
The Full Backup Pool specifies a Pool to be used for Full backups. It will override any
Pool (Dir->Job)
specification during a Full backup.
- Full Max Runtime
- Type:
The time specifies the maximum allowed time that a Full backup job may run, counted from when the job starts (not necessarily the same as when the job was scheduled).
- Incremental Backup Pool
- Type:
The Incremental Backup Pool specifies a Pool to be used for Incremental backups. It will override any
Pool (Dir->Job)
specification during an Incremental backup.
- Incremental Max Runtime
- Type:
The time specifies the maximum allowed time that an Incremental backup job may run, counted from when the job starts, (not necessarily the same as when the job was scheduled).
- Job Defs
- Type:
If a Job Defs resource name is specified, all the values contained in the named Job Defs resource will be used as the defaults for the current Job. Any value that you explicitly define in the current Job resource, will override any defaults specified in the Job Defs resource. The use of this directive permits writing much more compact Job resources where the bulk of the directives are defined in one or more Job Defs. This is particularly useful if you have many similar Jobs but with minor variations such as different Clients. To structure the configuration even more, Job Defs themselves can also refer to other Job Defs.
Warning
If a parameter like RunScript for example can be specified multiple times, the configuration will be added instead of overridden as described above. Therefore, if one RunScript is defined in the JobDefs and another in the job, both will be executed.
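A hedged sketch of how Job Defs keeps per-client Job resources compact (all names are placeholders):

    JobDefs {
      Name = "DefaultBackup"
      Type = Backup
      Level = Incremental
      FileSet = "LinuxAll"
      Schedule = "WeeklyCycle"
      Storage = File
      Pool = Incremental
      Messages = Standard
    }

    Job {
      Name = "backup-client1"
      Client = client1-fd
      JobDefs = "DefaultBackup"   # everything not set here is taken from the JobDefs
    }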
- Level
- Type:
The Level directive specifies the default Job level to be run. Each different
Type (Dir->Job)
(Backup, Restore, Verify, …) has a different set of Levels that can be specified. The Level is normally overridden by a different value that is specified in the Schedule Resource. This directive is not required, but must be specified either by this directive or as an override specified in the Schedule Resource.
- Backup
For a Backup Job, the Level may be one of the following:
- Full
When the Level is set to Full, all files in the FileSet, whether or not they have changed, will be backed up.
- Incremental
When the Level is set to Incremental, all files specified in the FileSet that have changed since the last successful backup of the same Job using the same FileSet and Client will be backed up. If the Director cannot find a previous valid Full backup, then the job will be upgraded into a Full backup. When the Director looks for a valid backup record in the catalog database, it looks for a previous Job with:
The same Job name.
The same Client name.
The same FileSet (any change to the definition of the FileSet, such as adding or deleting a file in the Include or Exclude sections, constitutes a different FileSet).
The Job was a Full, Differential, or Incremental backup.
The Job terminated normally (i.e. did not fail or was not canceled).
The Job started no longer ago than Max Full Interval.
If all the above conditions do not hold, the Director will upgrade the Incremental to a Full save. Otherwise, the Incremental backup will be performed as requested.
The File daemon (Client) decides which files to backup for an Incremental backup by comparing start time of the prior Job (Full, Differential, or Incremental) against the time each file was last “modified” (st_mtime) and the time its attributes were last “changed”(st_ctime). If the file was modified or its attributes changed on or after this start time, it will then be backed up.
Some virus scanning software may change st_ctime while doing the scan. For example, if the virus scanning program attempts to reset the access time (st_atime), which Bareos does not use, it will cause st_ctime to change and hence Bareos will backup the file during an Incremental or Differential backup. In the case of Sophos virus scanning, you can prevent it from resetting the access time (st_atime) and hence changing st_ctime by using the --no-reset-atime option. For other software, please see their manual.
When Bareos does an Incremental backup, all modified files that are still on the system are backed up. However, any file that has been deleted since the last Full backup remains in the Bareos catalog, which means that if between a Full save and the time you do a restore, some files are deleted, those deleted files will also be restored. The deleted files will no longer appear in the catalog after doing another Full save.
In addition, if you move a directory rather than copy it, the files in it do not have their modification time (st_mtime) or their attribute change time (st_ctime) changed. As a consequence, those files will probably not be backed up by an Incremental or Differential backup which depend solely on these time stamps. If you move a directory, and wish it to be properly backed up, it is generally preferable to copy it, then delete the original.
However, to manage deleted files or directory changes in the catalog during an Incremental backup, you can use Accurate mode. This is a quite memory-consuming process.
- Differential
When the Level is set to Differential, all files specified in the FileSet that have changed since the last successful Full backup of the same Job will be backed up. If the Director cannot find a valid previous Full backup for the same Job, FileSet, and Client, then the Differential job will be upgraded into a Full backup. When the Director looks for a valid Full backup record in the catalog database, it looks for a previous Job with:
The same Job name.
The same Client name.
The same FileSet (any change to the definition of the FileSet, such as adding or deleting a file in the Include or Exclude sections, constitutes a different FileSet).
The Job was a FULL backup.
The Job terminated normally (i.e. did not fail or was not canceled).
The Job started no longer ago than Max Full Interval.
If all the above conditions do not hold, the Director will upgrade the Differential to a Full save. Otherwise, the Differential backup will be performed as requested.
The File daemon (Client) decides which files to backup for a differential backup by comparing the start time of the prior Full backup Job against the time each file was last “modified” (st_mtime) and the time its attributes were last “changed” (st_ctime). If the file was modified or its attributes were changed on or after this start time, it will then be backed up. The start time used is displayed after the Since on the Job report. In rare cases, using the start time of the prior backup may cause some files to be backed up twice, but it ensures that no change is missed.
When Bareos does a Differential backup, all modified files that are still on the system are backed up. However, any file that has been deleted since the last Full backup remains in the Bareos catalog, which means that if between a Full save and the time you do a restore, some files are deleted, those deleted files will also be restored. The deleted files will no longer appear in the catalog after doing another Full save. However, to remove deleted files from the catalog during a Differential backup is quite a time consuming process and not currently implemented in Bareos. It is, however, a planned future feature.
As noted above, if you move a directory rather than copy it, the files in it do not have their modification time (st_mtime) or their attribute change time (st_ctime) changed. As a consequence, those files will probably not be backed up by an Incremental or Differential backup which depend solely on these time stamps. If you move a directory, and wish it to be properly backed up, it is generally preferable to copy it, then delete the original. Alternatively, you can move the directory, then use the touch program to update the timestamps.
However, to manage deleted files or directory changes in the catalog during a Differential backup, you can use Accurate mode. This is a quite memory-consuming process; see the Accurate directive for more details.
Every once in a while, someone asks why we need Differential backups as long as Incremental backups pick up all changed files. There are possibly many answers to this question, but the one that is the most important for me is that a Differential backup effectively merges all the Incremental and Differential backups since the last Full backup into a single Differential backup. This has two effects: 1. It gives some redundancy since the old backups could be used if the merged backup cannot be read. 2. More importantly, it reduces the number of Volumes that are needed to do a restore, effectively eliminating the need to read all the volumes on which the preceding Incremental and Differential backups since the last Full were done.
- VirtualFull
When the Level is set to VirtualFull, a new Full backup is generated from the last existing Full backup and the matching Differential and Incremental backups. It matches these according to the
Name (Dir->Client)
and Name (Dir->Fileset)
. This means a new Full backup gets created without transferring all the data from the client to the backup server again. The new Full backup will be stored in the pool defined in Next Pool (Dir->Pool)
.
Warning
In contrast to the other backup levels, VirtualFull may require read and write access to multiple volumes. In most cases you have to make sure that Bareos does not try to read and write to the same Volume. With Virtual Full, you are restricted to use the same Bareos Storage Daemon for the source and the destination, because the restore bsr file created for the job can only be read by one storage daemon at a time.
- Restore
For a Restore Job, no level needs to be specified.
- Verify
For a Verify Job, the Level may be one of the following:
- InitCatalog
does a scan of the specified FileSet and stores the file attributes in the Catalog database. Since no file data is saved, you might ask why you would want to do this. It turns out to be a very simple and easy way to have a Tripwire-like feature using Bareos. In other words, it allows you to save the state of a set of files defined by the FileSet and later check to see if those files have been modified or deleted and if any new files have been added. This can be used to detect system intrusion. Typically you would specify a FileSet that contains the set of system files that should not change (e.g. /sbin, /boot, /lib, /bin, …). Normally, you run the InitCatalog level verify one time when your system is first set up, and then once again after each modification (upgrade) to your system. Thereafter, when you want to check the state of your system files, you use a Verify level = Catalog. This compares the results of your InitCatalog with the current state of the files.
- Catalog
Compares the current state of the files against the state previously saved during an InitCatalog. Any discrepancies are reported. The items reported are determined by the verify options specified on the Include directive in the specified FileSet (see the FileSet resource below for more details). Typically this command will be run once a day (or night) to check for any changes to your system files.
Warning
If you run two Verify Catalog jobs on the same client at the same time, the results will certainly be incorrect. This is because Verify Catalog modifies the Catalog database while running in order to track new files.
- VolumeToCatalog
This level causes Bareos to read the file attribute data written to the Volume from the last backup Job for the job specified on the VerifyJob directive. The file attribute data are compared to the values saved in the Catalog database and any differences are reported. This is similar to the DiskToCatalog level except that instead of comparing the disk file attributes to the catalog database, the attribute data written to the Volume is read and compared to the catalog database. Although the attribute data including the signatures (MD5 or SHA1) are compared, the actual file data is not compared (it is not in the catalog).
VolumeToCatalog jobs require a client to extract the metadata, but this client does not have to be the original client. We suggest to use the client on the backup server itself for maximum performance.
Warning
If you run two Verify VolumeToCatalog jobs on the same client at the same time, the results will certainly be incorrect. This is because the Verify VolumeToCatalog modifies the Catalog database while running.
Limitation: Verify VolumeToCatalog does not check file checksums
When running a Verify VolumeToCatalog job the file data will not be checksummed and compared with the recorded checksum. As a result, file data errors that are introduced between the checksumming in the Bareos File Daemon and the checksumming of the block by the Bareos Storage Daemon will not be detected.
- DiskToCatalog
This level causes Bareos to read the files as they currently are on disk, and to compare the current file attributes with the attributes saved in the catalog from the last backup for the job specified on the VerifyJob directive. This level differs from the VolumeToCatalog level described above by the fact that it doesn’t compare against a previous Verify job but against a previous backup. When you run this level, you must supply the verify options on your Include statements. Those options determine what attribute fields are compared.
This command can be very useful if you have disk problems because it will compare the current state of your disk against the last successful backup, which may be several jobs old.
Note, the current implementation does not identify files that have been deleted.
- Max Diff Interval
- Type:
The time specifies the maximum allowed age (counting from start time) of the most recent successful Differential backup that is required in order to run Incremental backup jobs. If the most recent Differential backup is older than this interval, Incremental backups will be upgraded to Differential backups automatically. If this directive is not present, or specified as 0, then the age of the previous Differential backup is not considered.
- Max Full Consolidations
- Type:
- Default value:
0
- Since Version:
16.2.4
If “AlwaysIncrementalMaxFullAge” is configured, do not run more than “MaxFullConsolidations” consolidation jobs that include the Full backup.
- Max Full Interval
- Type:
The time specifies the maximum allowed age (counting from start time) of the most recent successful Full backup that is required in order to run Incremental or Differential backup jobs. If the most recent Full backup is older than this interval, Incremental and Differential backups will be upgraded to Full backups automatically. If this directive is not present, or specified as 0, then the age of the previous Full backup is not considered.
- Max Run Sched Time
- Type:
The time specifies the maximum allowed time that a job may run, counted from when the job was scheduled. This can be useful to prevent jobs from running during working hours. Think of it as
Max Start Delay + Max Run Time
.
- Max Run Time
- Type:
The time specifies the maximum allowed time that a job may run, counted from when the job starts, (not necessarily the same as when the job was scheduled).
By default, the watchdog thread will kill any Job that has run more than 6 days. The maximum watchdog timeout is independent of Max Run Time and cannot be changed.
- Max Start Delay
- Type:
The time specifies the maximum delay between the scheduled time and the actual start time for the Job. For example, a job can be scheduled to run at 1:00am, but because other jobs are running, it may wait to run. If the delay is set to 3600 (one hour) and the job has not begun to run by 2:00am, the job will be canceled. This can be useful, for example, to prevent jobs from running during day time hours. The default is no limit.
- Max Virtual Full Interval
- Type:
- Since Version:
14.4.0
The time specifies the maximum allowed age (counting from start time) of the most recent successful Virtual Full backup that is required in order to run Incremental or Differential backup jobs. If the most recent Virtual Full backup is older than this interval, Incremental and Differential backups will be upgraded to Virtual Full backups automatically. If this directive is not present, or specified as 0, then the age of the previous Virtual Full backup is not considered.
- Max Wait Time
- Type:
The time specifies the maximum allowed time that a job may block waiting for a resource (such as waiting for a tape to be mounted, or waiting for the storage or file daemons to perform their duties), counted from when the job starts (not necessarily the same as when the job was scheduled).
- Maximum Bandwidth
- Type:
The speed parameter specifies the maximum allowed bandwidth that a job may use.
- Maximum Concurrent Jobs
- Type:
- Default value:
1
Specifies the maximum number of Jobs from the current Job resource that can run concurrently. Note, this directive limits only Jobs with the same name as the resource in which it appears. Any other restrictions on the maximum concurrent jobs such as in the Director, Client or Storage resources will also apply in addition to the limit specified here.
For details, see the Concurrent Jobs chapter.
- Messages
- Required:
True
- Type:
The Messages directive defines what Messages resource should be used for this job, and thus how and where the various messages are to be delivered. For example, you can direct some messages to a log file, and others can be sent by email. For additional details, see the Messages Resource Chapter of this manual. This directive is required.
- Name
- Required:
True
- Type:
The name of the resource.
The Job name. This name can be specified on the Run command in the console program to start a job. If the name contains spaces, it must be specified between quotes. It is generally a good idea to give your job the same name as the Client that it will backup. This permits easy identification of jobs.
When the job actually runs, the unique Job Name will consist of the name you specify here followed by the date and time the job was scheduled for execution.
It is recommended to limit job names to 98 characters. Longer names are possible, but when the job is run, its name will be truncated to accommodate certain protocol limitations, as well as the above mentioned date and time.
- Pool
- Required:
True
- Type:
The Pool directive defines the pool of Volumes where your data can be backed up. Many Bareos installations will use only the Default pool. However, if you want to specify a different set of Volumes for different Clients or different Jobs, you will probably want to use Pools. For additional details, see the Pool Resource of this chapter. This directive is required.
In case of a Copy or Migration job, this setting determines what Pool will be examined for finding JobIds to migrate. The exception to this is when
Selection Type (Dir->Job)
= SQLQuery: although a Pool directive must still be specified, no Pool is used unless you specifically include it in the SQL query. Note, in any case, the Pool resource defined by the Pool directive must contain a Next Pool (Dir->Pool)
= … directive to define the Pool to which the data will be migrated.
- Prefer Mounted Volumes
- Type:
- Default value:
yes
If the Prefer Mounted Volumes directive is set to yes, the Storage daemon is requested to select either an Autochanger or a drive with a valid Volume already mounted in preference to a drive that is not ready. This means that all jobs will attempt to append to the same Volume (providing the Volume is appropriate – right Pool, … for that job), unless you are using multiple pools. If no drive with a suitable Volume is available, it will select the first available drive. Note, any Volume that has been requested to be mounted will be considered valid as a mounted volume by another job. Thus, if multiple jobs start at the same time and they all prefer mounted volumes, the first job will request the mount, and the other jobs will use the same volume.
If the directive is set to no, the Storage daemon will prefer finding an unused drive, otherwise, each job started will append to the same Volume (assuming the Pool is the same for all jobs). Setting Prefer Mounted Volumes to no can be useful for those sites with multiple drive autochangers that prefer to maximize backup throughput at the expense of using additional drives and Volumes. This means that the job will prefer to use an unused drive rather than use a drive that is already in use.
Despite the above, we recommend against setting this directive to no since it tends to add a lot of swapping of Volumes between the different drives and can easily lead to deadlock situations in the Storage daemon. We will accept bug reports against it, but we cannot guarantee that we will be able to fix the problem in a reasonable time.
A better alternative for using multiple drives is to use multiple pools so that Bareos will be forced to mount Volumes from those Pools on different drives.
- Prefix Links
- Type:
- Default value:
no
If a Where path prefix is specified for a recovery job, apply it to absolute links as well. The default is No. When set to Yes then while restoring files to an alternate directory, any absolute soft links will also be modified to point to the new alternate directory. Normally this is what is desired – i.e. everything is self consistent. However, if you wish to later move the files to their original locations, all files linked with absolute names will be broken.
- Priority
- Type:
- Default value:
10
This directive permits you to control the order in which your jobs will be run by specifying a positive non-zero number. The higher the number, the lower the job priority. Assuming you are not running concurrent jobs, all queued jobs of priority 1 will run before queued jobs of priority 2 and so on, regardless of the original scheduling order.
The priority only affects waiting jobs that are queued to run, not jobs that are already running. If one or more jobs of priority 2 are already running, and a new job is scheduled with priority 1, the currently running priority 2 jobs must complete before the priority 1 job is run, unless Allow Mixed Priority is set.
If you want to run concurrent jobs you should keep these points in mind:
See Concurrent Jobs on how to setup concurrent jobs.
Bareos concurrently runs jobs of only one priority at a time. It will not simultaneously run a priority 1 and a priority 2 job.
If Bareos is running a priority 2 job and a new priority 1 job is scheduled, it will wait until the running priority 2 job terminates even if the Maximum Concurrent Jobs settings would otherwise allow two jobs to run simultaneously.
Suppose that bareos is running a priority 2 job and a new priority 1 job is scheduled and queued waiting for the running priority 2 job to terminate. If you then start a second priority 2 job, the waiting priority 1 job will prevent the new priority 2 job from running concurrently with the running priority 2 job. That is: as long as there is a higher priority job waiting to run, no new lower priority jobs will start even if the Maximum Concurrent Jobs settings would normally allow them to run. This ensures that higher priority jobs will be run as soon as possible.
If you have several jobs of different priority, it may not be best to start them at exactly the same time, because Bareos must examine them one at a time. If Bareos starts a lower priority job first, then it will run before your high priority jobs. If you experience this problem, you may avoid it by starting any higher priority jobs a few seconds before lower priority ones. This ensures that Bareos will examine the jobs in the correct order, and that your priority scheme will be respected.
- Protocol
- Type:
- Default value:
Native
The backup protocol to use to run the Job. See dtProtocolType.
- Prune Files
- Type:
- Default value:
no
Normally, pruning of Files from the Catalog is specified on a Client by Client basis in
Auto Prune (Dir->Client)
. If this directive is specified and the value is yes, it will override the value specified in the Client resource.
- Prune Jobs
- Type:
- Default value:
no
Normally, pruning of Jobs from the Catalog is specified on a Client by Client basis in
Auto Prune (Dir->Client)
. If this directive is specified and the value is yes, it will override the value specified in the Client resource.
- Prune Volumes
- Type:
- Default value:
no
Normally, pruning of Volumes from the Catalog is specified on a Pool by Pool basis in
Auto Prune (Dir->Pool)
directive. Note, this is different from File and Job pruning which is done on a Client by Client basis. If this directive is specified and the value is yes, it will override the value specified in the Pool resource.
- Purge Migration Job
- Type:
- Default value:
no
This directive may be added to the Migration Job definition in the Director configuration file to purge the job migrated at the end of a migration.
- Regex Where
- Type:
This directive applies only to a Restore job and specifies a regex filename manipulation of all files being restored. This will use File Relocation feature.
For more information about how to use this option, see RegexWhere Format.
- Replace
- Type:
- Default value:
Always
This directive applies only to a Restore job and specifies what happens when Bareos wants to restore a file or directory that already exists. You have the following options for replace-option:
- always
when the file to be restored already exists, it is deleted and then replaced by the copy that was backed up. This is the default value.
- ifnewer
if the backed up file (on tape) is newer than the existing file, the existing file is deleted and replaced by the back up.
- ifolder
if the backed up file (on tape) is older than the existing file, the existing file is deleted and replaced by the back up.
- never
if the backed up file already exists, Bareos skips restoring this file.
- Rerun Failed Levels
- Type:
- Default value:
no
If this directive is set to yes (default no), and Bareos detects that a previous job at a higher level (i.e. Full or Differential) has failed, the current job level will be upgraded to the higher level. This is particularly useful for Laptops where they may often be unreachable, and if a prior Full save has failed, you wish the very next backup to be a Full save rather than whatever level it is started as.
There are several points that must be taken into account when using this directive: first, a failed job is defined as one that has not terminated normally, which includes any running job of the same name (you need to ensure that two jobs of the same name do not run simultaneously); secondly, the
Ignore File Set Changes (Dir->Fileset)
directive is not considered when checking for failed levels, which means that any FileSet change will trigger a rerun.
- Reschedule Interval
- Type:
- Default value:
1800
If you have specified Reschedule On Error = yes and the job terminates in error, it will be rescheduled after the interval of time specified by time-specification. See the time specification formats of
TIME
for details of time specifications. If no interval is specified, the job will not be rescheduled on error.
- Reschedule On Error
- Type:
- Default value:
no
If this directive is enabled, and the job terminates in error, the job will be rescheduled as determined by the
Reschedule Interval (Dir->Job)
and Reschedule Times (Dir->Job)
directives. If you cancel the job, it will not be rescheduled. This specification can be useful for portables, laptops, or other machines that are not always connected to the network or switched on.
Warning
In case of Bareos Director crash, none of the running nor waiting jobs will be rescheduled.
- Reschedule Times
- Type:
- Default value:
5
This directive specifies the maximum number of times to reschedule the job. If it is set to zero the job will be rescheduled an indefinite number of times.
- Run
- Type:
The Run directive (not to be confused with the Run option in a Schedule) allows you to start other jobs or to clone the current jobs.
The part after the equal sign must be enclosed in double quotes, and can contain any string or set of options (overrides) that you can specify when entering the run command from the console. For example storage=DDS-4 …. In addition, there are two special keywords that permit you to clone the current job. They are level=%l and since=%s. The %l in the level keyword permits entering the actual level of the current job and the %s in the since keyword permits putting the same time for comparison as used on the current job. Note, in the case of the since keyword, the %s must be enclosed in double quotes, and thus they must be preceded by a backslash since they are already inside quotes. For example:
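A hedged sketch of such a clone entry (the job name is a placeholder; the storage override and the level/since keywords are those described above):

    Run = "clone-to-tape level=%l since=\"%s\" storage=DDS-4"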
A cloned job will not start additional clones, so it is not possible to recurse.
Jobs started by
Run (Dir->Job)
are submitted for running before the original job (while it is being initialized). This means that any clone job will actually start before the original job, and may even block the original job from starting. It even ignores Priority (Dir->Job).
If you are trying to prioritize jobs, you will find it much easier to do so using a
Run Script (Dir->Job)
resource or a Run Before Job (Dir->Job)
directive.
- Run After Failed Job
- Type:
This is a shortcut for the
Run Script (Dir->Job)
resource that runs a command after a failed job. If the exit code of the program is non-zero, Bareos will print a warning message.
- Run After Job
- Type:
This is a shortcut for the
Run Script (Dir->Job)
resource that runs a command after a successful job (one that terminated without error and was not canceled). If the exit code of the program is non-zero, Bareos will print a warning message.
- Run Before Job
- Type:
This is a shortcut for the
Run Script (Dir->Job)
resource that runs a command before a job. If the exit code of the program is non-zero, the current Bareos job will be canceled.
Such a shortcut is equivalent to a full Run Script resource, as sketched below.
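A minimal sketch of the equivalence (the script path is a placeholder): the shortcut

    Run Before Job = "/usr/local/bin/pre-backup.sh"

corresponds to a Run Script resource of the form:

    Run Script {
      Command = "/usr/local/bin/pre-backup.sh"
      Runs When = Before
      Runs On Client = No
      Fail Job On Error = Yes
    }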
- Run On Incoming Connect Interval
- Type:
- Default value:
0
- Since Version:
19.2.4
The interval specifies the time between the most recent successful backup (counting from start time) and the event of a client initiated connection. When this interval is exceeded the job is started automatically.
- Run Script
- Type:
The RunScript directive behaves like a resource in that it requires opening and closing braces around a number of directives that make up the body of the runscript.
Command options specifies commands to run as an external program prior or after the current job.
Console options are special commands that are sent to the Bareos Director instead of the OS. Console command outputs are redirected to log with the jobid 0.
You can use the following console commands: delete, disable, enable, estimate, list, llist, memory, prune, purge, release, reload, status, setdebug, show, time, trace, update, version, whoami, .client, .jobs, .pool, .storage. See Bareos Console for more information. You need to specify all needed information on the command line, as nothing will be prompted. You can specify more than one Command/Console option per RunScript.
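A hedged sketch of a Console option inside a Run Script (the prune command and the %c substitution are described in this section; the exact command is a placeholder):

    Run Script {
      Console = "prune files client=%c yes"
      Runs When = After
      Runs On Success = Yes
      Runs On Failure = No
    }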
The following options may be specified in the body of the RunScript:
Options | Value | Description |
---|---|---|
Runs On Success | Yes / No | run if JobStatus is successful |
Runs On Failure | Yes / No | run if JobStatus isn’t successful |
Runs On Client | Yes / No | run a command on the client (only for external commands, not console commands) |
Runs When | Never / Before / After / Always / AfterVSS | when to run |
Fail Job On Error | Yes / No | fail the job if the script returns something different from 0 |
Command | | external command (optional) |
Console | | console command (optional) |
Any output sent by the command to standard output will be included in the Bareos job report. The command string must be a valid program name or name of a shell script.
RunScript commands that are configured to run “before” a job, are also executed before the device reservation.
Warning
The command string is parsed then fed to the OS, which means that the path will be searched to execute your specified command, but there is no shell interpretation. As a consequence, if you invoke complicated commands or want any shell features such as redirection or piping, you must call a shell script and do it inside that script. Alternatively, it is possible to use sh -c '...' in the command definition to force shell interpretation, see example below.
Before executing the specified command, Bareos performs character substitution of the following characters:
Character | Meaning
---|---
%% | %
%b | Job Bytes
%B | Job Bytes in human readable format
%c | Client's name
%d | Daemon's name (such as host-dir or host-fd)
%D | Director's name (also valid on a Bareos File Daemon)
%e | Job Exit Status
%f | Job FileSet (only on director side)
%F | Job Files
%h | Client address
%i | Job Id
%j | Unique Job Id
%l | Job Level
%m | Modification time (only on Bareos File Daemon side for incremental and differential)
%n | Job name
%N | New Job Id (only on director side during migration/copy jobs)
%O | Previous Job Id (only on director side during migration/copy jobs)
%p | Pool name (only on director side)
%P | Daemon PID
%s | Since time
%t | Job type (Backup, …)
%v | Read Volume name(s) (only on director side)
%V | Write Volume name(s) (only on director side)
%w | Storage name (only on director side)
%x | Spooling enabled? ("yes" or "no")
Some character substitutions are not available in all situations.
The Job Exit Status code %e expands to one of the following values: OK, Error, Fatal Error, Canceled, Differences, Unknown term code. Since several of these values contain spaces, if you use %e on a command line you will need to enclose it within some sort of quotes.
You can use the following shortcuts:

Keyword | RunsOnSuccess | RunsOnFailure | FailJobOnError | Runs On Client | RunsWhen
---|---|---|---|---|---
Run Before Job | | | Yes | No | Before
Run After Job | Yes | No | | No | After
Run After Failed Job | No | Yes | | No | After
Client Run Before Job | | | Yes | Yes | Before
Client Run After Job | Yes | No | | Yes | After
Examples:
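Hedged sketches (script path, log file and commands are illustrative); the second one uses sh -c to get shell redirection, as discussed above:

RunScript {
  RunsWhen = Before
  FailJobOnError = Yes
  Command = "/usr/local/bin/pre_backup.sh %c %l"
}

RunScript {
  RunsWhen = After
  RunsOnFailure = No
  RunsOnClient = No
  Command = "sh -c 'echo %j finished >> /var/log/bareos/job-report.log'"
}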
Special Windows Considerations
You can run scripts just after snapshot initialization with the AfterVSS keyword.
In addition, for a Windows client, please take note that you must ensure a correct path to your script. The script or program can be a .com, .exe or a .bat file. If you just put the program name in then Bareos will search using the same rules that cmd.exe uses (current directory, Bareos bin directory, and PATH). It will even try the different extensions in the same order as cmd.exe. The command can be anything that cmd.exe or command.com will recognize as an executable file.
However, if you have slashes in the program name then Bareos figures you are fully specifying the name, so you must also explicitly add the three character extension.
The command is run in a Win32 environment, so Unix like commands will not work unless you have installed and properly configured Cygwin in addition to and separately from Bareos.
The System %Path% will be searched for the command. (Under the environment variable dialog you have both System Environment and User Environment; we believe that only the System environment will be available to bareos-fd if it is running as a service.)
System environment variables can be referenced with %var% and used as either part of the command name or arguments.
So if you have a script in the Bareos bin directory then the following lines should work fine:
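For instance (script names are illustrative; the last two forms show the escaped inner quotes needed when the name contains spaces):

Client Run Before Job = "systemstate"
Client Run Before Job = "systemstate.bat"
Client Run Before Job = "\"system state\""
Client Run Before Job = "\"system state.bat\""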
The outer set of quotes is removed when the configuration file is parsed. You need to escape the inner quotes so that they are there when the code that parses the command line for execution runs so it can tell what the program name is.
The special characters
&<>()@^|
will need to be quoted if they are part of a filename or argument. If someone is logged in, a blank "command" window running the commands will be present during the execution of the command.
Some Suggestions from Phil Stracchino for running on Win32 machines with the native Win32 Bareos File Daemon:
You might want the ClientRunBeforeJob directive to specify a .bat file which runs the actual client-side commands, rather than trying to run (for example) regedit /e directly.
The batch file should explicitly 'exit 0' on successful completion.
The path to the batch file should be specified in Unix form:
Client Run Before Job = "c:/bareos/bin/systemstate.bat"
rather than DOS/Windows form:
INCORRECT:
Client Run Before Job = "c:\bareos\bin\systemstate.bat"
For Win32, please note that there are certain limitations:
Client Run Before Job = "C:/Program Files/Bareos/bin/pre-exec.bat"
Lines like the above do not work because of limitations of cmd.exe, which is used to execute the command. Bareos prefixes the string you supply with cmd.exe /c. To test that your command works you should type cmd /c "C:/Program Files/test.exe" at a cmd prompt and see what happens. Once the command is correct, insert a backslash (\) before each double quote ("), and then put quotes around the whole thing when putting it in the Bareos Director configuration file. You either need to have only one set of quotes or else use the short name and don't put quotes around the command path.
Below is the output from cmd’s help as it relates to the command line passed to the /c option.
If /C or /K is specified, then the remainder of the command line after the switch is processed as a command line, where the following logic is used to process quote (") characters:
If all of the following conditions are met, then quote characters on the command line are preserved:
no /S switch.
exactly two quote characters.
no special characters between the two quote characters, where special is one of:
&<>()@^|
there are one or more whitespace characters between the two quote characters.
the string between the two quote characters is the name of an executable file.
Otherwise, old behavior is to see if the first character is a quote character and if so, strip the leading character and remove the last quote character on the command line, preserving any text after the last quote character.
- Save File History
- Type:
- Default value:
yes
- Since Version:
14.2.0
Allow disabling storing the file history, as this causes problems with some implementations of NDMP (out-of-order metadata).
With
File History Size (Dir->Job)
the maximum number of files and directories inside an NDMP job can be configured.
Warning
The File History is required to do a single file restore from NDMP backups. With this disabled, only full restores are possible.
- Schedule
- Type:
The Schedule directive defines what schedule is to be used for the Job. The schedule in turn determines when the Job will be automatically started and what Job level (i.e. Full, Incremental, …) is to be run. This directive is optional, and if left out, the Job can only be started manually using the Console program. Although you may specify only a single Schedule resource for any one job, the Schedule resource may contain multiple Run directives, which allow you to run the Job at many different times, and each Run directive permits overriding the default Job Level, Pool, Storage, and Messages resources. This gives considerable flexibility in what can be done with a single Job. For additional details, see Schedule Resource.
- SD Plugin Options
- Type:
These settings are plugin specific, see Storage Daemon Plugins.
- Selection Pattern
- Type:
The Selection Pattern directive is only used for Copy and Migration jobs, see Migration and Copy. The interpretation of its value depends on the selected
Selection Type (Dir->Job)
. For the OldestVolume and SmallestVolume keywords, this Selection Pattern is not used (ignored).
For the Client, Volume, and Job keywords, this pattern must be a valid regular expression that will filter the appropriate item names found in the Pool.
For the SQLQuery keyword, this pattern must be a valid SELECT SQL statement that returns JobIds.
Example:
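A hedged sketch that selects all Volumes whose names start with a given prefix (the pattern is illustrative):

Selection Type = Volume
Selection Pattern = "Full-.*"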
- Selection Type
- Type:
Selection Type is only used for Copy and Migration jobs, see Migration and Copy. It determines how a migration job will go about selecting what JobIds to migrate. In most cases, it is used in conjunction with a
Selection Pattern (Dir->Job)
to give you fine control over exactly which JobIds are selected. The possible values are:
- SmallestVolume
This selection keyword selects the volume with the fewest bytes from the Pool to be migrated. The Pool to be migrated is the Pool defined in the Migration Job resource. The migration control job will then start and run one migration backup job for each of the Jobs found on this Volume. The Selection Pattern, if specified, is not used.
- OldestVolume
This selection keyword selects the volume with the oldest last write time in the Pool to be migrated. The Pool to be migrated is the Pool defined in the Migration Job resource. The migration control job will then start and run one migration backup job for each of the Jobs found on this Volume. The Selection Pattern, if specified, is not used.
- Client
The Client selection type, first selects all the Clients that have been backed up in the Pool specified by the Migration Job resource, then it applies the
Selection Pattern (Dir->Job)
as a regular expression to the list of Client names, giving a filtered Client name list. All jobs that were backed up for those filtered (regexed) Clients will be migrated. The migration control job will then start and run one migration backup job for each of the JobIds found for those filtered Clients.
- Volume
The Volume selection type, first selects all the Volumes that have been backed up in the Pool specified by the Migration Job resource, then it applies the
Selection Pattern (Dir->Job)
as a regular expression to the list of Volume names, giving a filtered Volume list. All JobIds that were backed up for those filtered (regexed) Volumes will be migrated. The migration control job will then start and run one migration backup job for each of the JobIds found on those filtered Volumes.
- Job
The Job selection type, first selects all the Jobs (as defined on the
Name (Dir->Job)
directive in a Job resource) that have been backed up in the Pool specified by the Migration Job resource, then it applies the Selection Pattern (Dir->Job)
as a regular expression to the list of Job names, giving a filtered Job name list. All JobIds that were run for those filtered (regexed) Job names will be migrated. Note, for a given Job name there can be many jobs (JobIds) that ran. The migration control job will then start and run one migration backup job for each of the Jobs found.
- SQLQuery
The SQLQuery selection type uses the
Selection Pattern (Dir->Job)
as an SQL query to obtain the JobIds to be migrated. The Selection Pattern must be a valid SELECT SQL statement for your SQL engine, and it must return the JobId as the first field of the SELECT.
- PoolOccupancy
This selection type will cause the Migration job to compute the total size of the specified pool for all Media Types combined. If it exceeds the
Migration High Bytes (Dir->Pool)
defined in the Pool, the Migration job will migrate all JobIds beginning with the oldest Volume in the pool (determined by Last Write time) until the Pool bytes drop below the Migration Low Bytes (Dir->Pool)
defined in the Pool. This calculation should be considered rather approximate because it is made once by the Migration job before migration is begun, and thus does not take into account additional data written into the Pool during the migration. In addition, the calculation of the total Pool byte size is based on the Volume bytes saved in the Volume (Media) database entries. The bytes calculated for Migration are based on the value stored in the Job records of the Jobs to be migrated. These do not include the Storage daemon overhead that is included in the total Pool size. As a consequence, normally, the migration will migrate more bytes than strictly necessary.
- PoolTime
The PoolTime selection type will cause the Migration job to look at the time each JobId has been in the Pool since the job ended. All Jobs in the Pool longer than the time specified on
Migration Time (Dir->Pool)
directive in the Pool resource will be migrated.
- PoolUncopiedJobs
This selection type, which copies all jobs from a pool to another pool that were not copied before, is available only for Copy jobs.
- Spool Attributes
- Type:
- Default value:
no
If Spool Attributes is disabled, the File attributes are sent by the Storage daemon to the Director as they are stored on tape. However, if you want to avoid the possibility that database updates will slow down writing to the tape, you may want to set the value to yes, in which case the Storage daemon will buffer the File attributes and Storage coordinates to a temporary file in the Working Directory; when writing the Job data to the tape is completed, the attributes and storage coordinates will be sent to the Director.
NOTE: When
Spool Data (Dir->Job)
is set to yes, Spool Attributes is also automatically set to yes. For details, see Data Spooling.
- Spool Data
- Type:
- Default value:
no
If this directive is set to yes, the Storage daemon will be requested to spool the data for this Job to disk rather than write it directly to the Volume (normally a tape).
Thus the data is written in large blocks to the Volume rather than small blocks. This directive is particularly useful when running multiple simultaneous backups to tape. Once all the data arrives or the spool files’ maximum sizes are reached, the data will be despooled and written to tape.
Spooling data prevents interleaving the data from several jobs and reduces or eliminates tape drive stop-and-start, commonly known as "shoe-shine".
We don't recommend using this option if you are writing to a disk file; using this option will probably just slow down the backup jobs.
NOTE: When this directive is set to yes,
Spool Attributes (Dir->Job)
is also automatically set to yes. For details, see Data Spooling.
- Spool Size
- Type:
This specifies the maximum spool size for this job. The default is taken from
Maximum Spool Size (Sd->Device)
limit.
- Storage
- Type:
The Storage directive defines the name of the storage services where you want to backup the FileSet data. For additional details, see the Storage Resource of this manual. The Storage resource may also be specified in the Job's Pool resource, in which case the value in the Pool resource overrides any value in the Job. The Storage resource definition is not required in either the Job resource or the Pool resource, but it must be specified in one or the other; if not, an error will result.
- Strip Prefix
- Type:
This directive applies only to a Restore job and specifies a prefix to remove from the directory name of all files being restored. This will use the File Relocation feature.
Using Strip Prefix=/etc, /etc/passwd will be restored to /passwd.
Under Windows, if you want to restore c:/files to d:/files, you can use:
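A minimal sketch using the companion Add Prefix directive:

Strip Prefix = "c:"
Add Prefix = "d:"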
- Type
- Required:
True
- Type:
The Type directive specifies the Job type, which is one of the following:
- Backup
- Run a backup Job. Normally you will have at least one Backup job for each client you want to save. Normally, unless you turn off cataloging, most of the important statistics and data concerning the files backed up will be placed in the catalog.
- Restore
- Run a restore Job. Normally, you will specify only one Restore job which acts as a sort of prototype that you will modify using the console program in order to perform restores. Although certain basic information from a Restore job is saved in the catalog, it is very minimal compared to the information stored for a Backup job – for example, no File database entries are generated since no Files are saved.
Restore jobs cannot be automatically started by the scheduler as is the case for Backup, Verify and Admin jobs. To restore files, you must use the restore command in the console.
- Verify
- Run a verify Job. In general, verify jobs permit you to compare the contents of the catalog to the file system, or to what was backed up. In addition, to verifying that a tape that was written can be read, you can also use verify as a sort of tripwire intrusion detection.
- Admin
- Run an admin Job. An Admin job can be used to periodically run catalog pruning, if you do not want to do it at the end of each Backup Job. Although an Admin job is recorded in the catalog, very little data is saved.
- Migrate
defines the job that is run as being a Migration Job. A Migration Job is a sort of control job and does not have any Files associated with it, and in that sense they are more or less like an Admin job. Migration jobs simply check to see if there is anything to Migrate then possibly start and control new Backup jobs to migrate the data from the specified Pool to another Pool. Note, any original JobId that is migrated will be marked as having been migrated, and the original JobId can no longer be used for restores; all restores will be done from the new migrated Job.
- Copy
defines the job that is run as being a Copy Job. A Copy Job is a sort of control job and does not have any Files associated with it, and in that sense they are more or less like an Admin job. Copy jobs simply check to see if there is anything to Copy then possibly start and control new Backup jobs to copy the data from the specified Pool to another Pool. Note that when a copy is made, the original JobIds are left unchanged. The new copies can not be used for restoration unless you specifically choose them by JobId. If you subsequently delete a JobId that has a copy, the copy will be automatically upgraded to a Backup rather than a Copy, and it will subsequently be used for restoration.
- Consolidate
is used to consolidate Always Incremental Backup jobs, see Always Incremental Backup Scheme. It was introduced in Bareos Version >= 16.2.4.
Within a particular Job Type, there are also Levels, see
Level (Dir->Job)
.
- Verify Job
- Type:
This directive is an alias.
If you run a verify job without this directive, the last job run will be compared with the catalog, which means that you must immediately follow a backup by a verify command. If you specify a Verify Job, Bareos will find the last job with that name that ran. This permits you to run all your backups, then run Verify jobs on those that you wish to be verified (most often a VolumeToCatalog) so that the tape just written is re-read.
- Virtual Full Backup Pool
- Type:
The Virtual Full Backup Pool specifies a Pool to be used for Virtual Full backups. It will override any
Pool (Dir->Job)
specification during a Virtual Full backup.
- Where
- Type:
This directive applies only to a Restore job and specifies a prefix to the directory name of all files being restored. This permits files to be restored in a different location from which they were saved. If Where is not specified or is set to slash (/), the files will be restored to their original location. By default, we have set Where in the example configuration files to be /tmp/bareos-restores. This is to prevent accidental overwriting of your files.
Warning
To use Where on NDMP backups, please read Restore files to different path
- Write Bootstrap
- Type:
The writebootstrap directive specifies a file name where Bareos will write a bootstrap file for each Backup job run. This directive applies only to Backup Jobs. If the Backup job is a Full save, Bareos will erase any current contents of the specified file before writing the bootstrap records. If the Job is an Incremental or Differential save, Bareos will append the current bootstrap record to the end of the file.
Using this feature permits you to constantly have a bootstrap file that can recover the current state of your system. Normally, the file specified should be on a mounted drive on another machine, so that if your hard disk is lost, you will immediately have a bootstrap record available. Alternatively, you should copy the bootstrap file to another machine after it is updated. Note, it is a good idea to write a separate bootstrap file for each Job backed up, including the job that backs up your catalog database.
If the bootstrap-file-specification begins with a vertical bar (|), Bareos will use the specification as the name of a program to which it will pipe the bootstrap record. It could for example be a shell script that emails you the bootstrap record.
Before opening the file or executing the specified command, Bareos performs character substitution like in RunScript directive. To automatically manage your bootstrap files, you can use this in your JobDefs resources:
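A hedged sketch (the JobDefs name and the path are illustrative; %c and %n are the client and job name substitutions described above):

JobDefs {
  Name = "DefaultJob"
  Write Bootstrap = "/var/lib/bareos/%c_%n.bsr"
}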
For more details on using this file, please see chapter The Bootstrap File.
The following is an example of a valid Job resource definition:
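A hedged sketch (all referenced resource names are illustrative and must exist elsewhere in your configuration):

Job {
  Name = "client1-data"
  Type = Backup
  Level = Incremental
  Client = client1-fd
  FileSet = "Data Set"
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Pool = Incremental
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"
}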
JobDefs Resource
The JobDefs resource permits all the same directives that can appear in a Job resource. However, a JobDefs resource does not create a Job, rather it can be referenced within a Job to provide defaults for that Job. This permits you to concisely define several nearly identical Jobs, each one referencing a JobDefs resource which contains the defaults. Only the changes from the defaults need to be mentioned in each Job.
Schedule Resource
The Schedule resource provides a means of automatically scheduling a Job as well as the ability to override the default Level, Pool, Storage and Messages resources. If a Schedule resource is not referenced in a Job, the Job can only be run manually. In general, you specify an action to be taken and when.
configuration directive name |
type of data |
default value |
remark |
---|---|---|---|
= |
|||
= |
yes |
||
= |
required |
||
- Run
- Type:
The Run directive defines when a Job is to be run, and what overrides if any to apply. You may specify multiple run directives within a Schedule resource. If you do, they will all be applied (i.e. multiple schedules). If you have two Run directives that start at the same time, two Jobs will start at the same time (well, within one second of each other).
The Job-overrides permit overriding the Level, the Storage, the Messages, and the Pool specifications provided in the Job resource. In addition, the FullPool, the IncrementalPool, and the DifferentialPool specifications permit overriding the Pool specification according to what backup Job Level is in effect.
By the use of overrides, you may customize a particular Job. For example, you may specify a Messages override for your Incremental backups that outputs messages to a log file, but for your weekly or monthly Full backups, you may send the output by email by using a different Messages override.
Job-overrides are specified as: keyword=value where the keyword is Level, Storage, Messages, Pool, FullPool, DifferentialPool, or IncrementalPool, and the value is as defined on the respective directive formats for the Job resource. You may specify multiple Job-overrides on one Run directive by separating them with one or more spaces or by separating them with a trailing comma. For example:
- Level=Full
is all files in the FileSet whether or not they have changed.
- Level=Incremental
is all files that have changed since the last backup.
- Pool=Weekly
specifies to use the Pool named Weekly.
- Storage=DLT_Drive
specifies to use DLT_Drive for the storage device.
- Messages=Verbose
specifies to use the Verbose message resource for the Job.
- FullPool=Full
specifies to use the Pool named Full if the job is a full backup, or is upgraded from another type to a full backup.
- DifferentialPool=Differential
specifies to use the Pool named Differential if the job is a differential backup.
- IncrementalPool=Incremental
specifies to use the Pool named Incremental if the job is an incremental backup.
- Accurate=yes|no
tells Bareos whether or not to use the Accurate code for the specific job. It can allow you to save memory and CPU resources on the catalog server in some cases.
- SpoolData=yes|no
tells Bareos to use or not to use spooling for the specific job.
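Putting several of these overrides together, a single Run line in a Schedule might look like the following sketch (the resource names are the ones used above for illustration):

Run = Level=Full Pool=Weekly Storage=DLT_Drive Messages=Verbose 1st sun at 2:05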
Date-time-specification determines when the Job is to be run. The specification is a repetition, and as a default Bareos is set to run a job at the beginning of the hour of every hour of every day of every week of every month of every year. This is not normally what you want, so you must specify or limit when you want the job to run. Any specification given is assumed to be repetitive in nature and will serve to override or limit the default repetition. This is done by specifying masks or times for the hour, day of the month, day of the week, week of the month, week of the year, and month when you want the job to run. By specifying one or more of the above, you can define a schedule to repeat at almost any frequency you want.
Basically, you must supply a month, day, hour, and minute the Job is to be run. Of these four items to be specified, day is special in that you may either specify a day of the month such as 1, 2, … 31, or you may specify a day of the week such as Monday, Tuesday, … Sunday. Finally, you may also specify a week qualifier to restrict the schedule to the first, second, third, fourth, or fifth week of the month.
For example, if you specify only a day of the week, such as Tuesday the Job will be run every hour of every Tuesday of every Month. That is the month and hour remain set to the defaults of every month and all hours.
Note, by default with no other specification, your job will run at the beginning of every hour. If you wish your job to run more than once in any given hour, you will need to specify multiple run specifications each with a different minute.
The date/time to run the Job can be specified in the following way in pseudo-BNF:
<week-keyword> ::= 1st | 2nd | 3rd | 4th | 5th | first | second | third | fourth | fifth | last
<wday-keyword> ::= sun | mon | tue | wed | thu | fri | sat | sunday | monday | tuesday | wednesday | thursday | friday | saturday
<week-of-year-keyword> ::= w00 | w01 | ... w52 | w53
<month-keyword> ::= jan | feb | mar | apr | may | jun | jul | aug | sep | oct | nov | dec | january | february | ... | december
<digit> ::= 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 0
<number> ::= <digit> | <digit><number>
<12hour> ::= 0 | 1 | 2 | ... 12
<hour> ::= 0 | 1 | 2 | ... 23
<minute> ::= 0 | 1 | 2 | ... 59
<day> ::= 1 | 2 | ... 31
<time> ::= <hour>:<minute> | <12hour>:<minute>am | <12hour>:<minute>pm
<time-spec> ::= at <time> | hourly
<day-range> ::= <day>-<day>
<month-range> ::= <month-keyword>-<month-keyword>
<wday-range> ::= <wday-keyword>-<wday-keyword>
<range> ::= <day-range> | <month-range> | <wday-range>
<modulo> ::= <day>/<day> | <week-of-year-keyword>/<week-of-year-keyword>
<date> ::= <date-keyword> | <day> | <range>
<date-spec> ::= <date> | <date-spec>
<day-spec> ::= <day> | <wday-keyword> | <day-range> | <wday-range> | <week-keyword> <wday-keyword> | <week-keyword> <wday-range> | daily
<month-spec> ::= <month-keyword> | <month-range> | monthly
<date-time-spec> ::= <month-spec> <day-spec> <time-spec>
Note, the Week of Year specification wnn follows the ISO standard definition of the week of the year, where Week 1 is the week in which the first Thursday of the year occurs, or alternatively, the week which contains the 4th of January. Weeks are numbered w01 to w53. w00 for Bareos is the week that precedes the first ISO week (i.e. has the first few days of the year if any occur before Thursday). w00 is not defined by the ISO specification. A week starts with Monday and ends with Sunday.
According to the NIST (US National Institute of Standards and Technology), 12am and 12pm are ambiguous and can be defined to mean anything. However, 12:01am is the same as 00:01 and 12:01pm is the same as 12:01, so Bareos defines 12am as 00:00 (midnight) and 12pm as 12:00 (noon). You can avoid this ambiguity (confusion) by using 24 hour time specifications (i.e. no am/pm).
An example schedule resource that is named WeeklyCycle and runs a job with level full each Sunday at 2:05am and an incremental job Monday through Saturday at 2:05am is:
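A minimal sketch matching that description:

Schedule {
  Name = "WeeklyCycle"
  Run = Level=Full sun at 2:05
  Run = Level=Incremental mon-sat at 2:05
}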
Further example schedules are sketched below: a possible monthly cycle; a run on the first of every month; a run on the last Friday of the month (i.e. the last Friday in the last week of the month); and a job every 10 minutes.
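Hedged sketches of these cases (pool names, levels and times are illustrative):

# A possible monthly cycle
Schedule {
  Name = "MonthlyCycle"
  Run = Level=Full Pool=Monthly 1st sun at 2:05
  Run = Level=Differential 2nd-5th sun at 2:05
  Run = Level=Incremental Pool=Daily mon-sat at 2:05
}

# The first of every month
Schedule {
  Name = "First"
  Run = Level=Full 1 at 2:05
  Run = Level=Incremental 2-31 at 2:05
}

# The last Friday of the month
Run = Level=Full last fri at 21:00

# Every 10 minutes, all day ("hourly at 0:05" repeats every hour at minute 05)
Schedule {
  Name = "TenMinutes"
  Run = Level=Full hourly at 0:05
  Run = Level=Full hourly at 0:15
  Run = Level=Full hourly at 0:25
  Run = Level=Full hourly at 0:35
  Run = Level=Full hourly at 0:45
  Run = Level=Full hourly at 0:55
}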
The modulo scheduler makes it easy to specify schedules like odd or even days/weeks, or more generally every n days or weeks. It is called modulo scheduler because it uses the modulo to determine if the schedule must be run or not. The second variable behind the slash lets you determine in which cycle of days/weeks a job should be run. The first part determines on which day/week the job should be run first. E.g. if you want to run a backup in a 5-week-cycle, starting on week 3, you set it up as w03/w05.
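Following the w03/w05 example from the text, such a Run line might look like this (the day and time are illustrative):

Run = Level=Full w03/w05 mon at 3:00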
Technical Notes on Schedules
Internally Bareos keeps a schedule as a bit mask. There are six masks and a minute field to each schedule. The masks are hour, day of the month (mday), month, day of the week (wday), week of the month (wom), and week of the year (woy). The schedule is initialized to have the bits of each of these masks set, which means that at the beginning of every hour, the job will run. When you specify a month for the first time, the mask will be cleared and the bit corresponding to your selected month will be selected. If you specify a second month, the bit corresponding to it will also be added to the mask. Thus when Bareos checks the masks to see if the bits are set corresponding to the current time, your job will run only in the two months you have set. Likewise, if you set a time (hour), the hour mask will be cleared, and the hour you specify will be set in the bit mask and the minutes will be stored in the minute field.
For any schedule you have defined, you can see how these bits are set by doing a show schedules command in the Console program. Please note that the bit mask is zero based, and Sunday is the first day of the week (bit zero).
FileSet Resource
The FileSet resource defines what files are to be included or excluded in a backup job. A FileSet resource is required for each backup Job. It consists of a list of files or directories to be included, a list of files or directories to be excluded and the various backup options such as compression, encryption, and signatures that are to be applied to each file.
Any change to the list of the included files will cause Bareos to automatically create a new FileSet (defined by the name and an MD5 checksum of the Include/Exclude File directives contents). Each time a new FileSet is created, Bareos will ensure that the next backup is always a full backup. However, this only applies to changes in the directives File (Dir->Fileset->Include) and File (Dir->Fileset->Exclude). Changes in other directives or the FileSet Options Resource do not result in an upgrade to a full backup. Use Ignore File Set Changes (Dir->Fileset) to disable this behavior.
configuration directive name |
type of data |
default value |
remark |
---|---|---|---|
= |
|||
= |
yes |
||
= |
no |
||
= |
required |
- Enable VSS
- Type:
- Default value:
yes
If this directive is set to yes, the File daemon will be notified that the user wants to use a Volume Shadow Copy Service (VSS) backup for this job. This directive is effective only on the Windows File Daemon. It permits a consistent copy of open files to be made for cooperating writer applications, and for applications that are not VSS aware, Bareos can at least copy open files. The Volume Shadow Copy will only be done on Windows drives where the drive (e.g. C:, D:, …) is explicitly mentioned in a File directive. For more information, please see the Windows chapter of this manual.
- Exclude
- Type:
Describes the files that should be excluded from a backup; see the section about the FileSet Exclude Resource.
- Ignore File Set Changes
- Type:
- Default value:
no
Normally, if you modify
File (Dir->Fileset->Include)
or File (Dir->Fileset->Exclude)
of the FileSet Include or Exclude lists, the next backup will be forced to a full so that Bareos can guarantee that any additions or deletions are properly saved. We strongly recommend against setting this directive to yes, since doing so may cause you to have an incomplete set of backups.
If this directive is set to yes, any changes you make to the FileSet Include or Exclude lists, will not force a Full during subsequent backups.
- Include
- Type:
Describes the files that should be included in a backup; see the section about the FileSet Include Resource.
FileSet Include Resource
The Include resource must contain a list of directories and/or files to be processed in the backup job.
Normally, all files found in all subdirectories of any directory in the Include File list will be backed up. The Include resource may also contain one or more Options resources that specify options such as compression to be applied to all or any subset of the files found when processing the file-list for backup. Please see below for more details concerning Options resources.
There can be any number of Include resources within the FileSet, each having its own list of directories or files to be backed up and the backup options defined by one or more Options resources.
Please take note of the following items in the FileSet syntax:
There is no equal sign (=) after the Include and before the opening brace ({). The same is true for the Exclude.
Each directory (or filename) to be included or excluded is preceded by a File =. Previously they were simply listed on separate lines.
The Exclude resource does not accept Options.
When using wild-cards or regular expressions, directory names are always terminated with a slash (/) and filenames have no trailing slash.
- File
- Type:
“path”
- Type:
“<includefile-server”
- Type:
“\\<includefile-client”
- Type:
“|command-server”
- Type:
“\\|command-client”
The file list consists of one file or directory name per line. Directory names should be specified without a trailing slash with Unix path notation.
Note
Windows users, please take note to specify directories (even
c:/...
) in Unix path notation. If you use Windows conventions, you will most likely not be able to restore your files due to the fact that the Windows path separator (\) was defined as an escape character long before Windows existed, and Bareos adheres to that convention (i.e. \ means the next character appears as itself). You should always specify a full path for every directory and file that you list in the FileSet. In addition, on Windows machines, you should always prefix the directory or filename with the drive specification (e.g. c:/xxx) using Unix directory name separators (forward slash). The drive letter itself can be upper or lower case (e.g. c:/xxx or C:/xxx). A file item may not contain wild-cards. Use directives in the FileSet Options Resource if you wish to specify wild-cards or regular expression matching.
Bareos's default for processing directories is to recursively descend in the directory saving all files and subdirectories. Bareos will not by default cross filesystems (or mount points in Unix parlance). This means that if you specify the root partition (e.g. /), Bareos will save only the root partition and not any of the other mounted filesystems. Similarly on Windows systems, you must explicitly specify each of the drives you want saved (e.g. c:/ and d:/ …). In addition, at least for Windows systems, you will most likely want to enclose each specification within double quotes, particularly if the directory (or file) name contains spaces. Take special care not to include a directory twice or Bareos will by default backup the same files twice, wasting a lot of space on your archive device. Including a directory twice is very easy to do.
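For example, an Include such as the following sketch (the Options line is incidental):

Include {
  Options { compression = GZIP }
  File = /
  File = /usr
}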
on a Unix system where /usr is a subdirectory (rather than a mounted filesystem) will cause /usr to be backed up twice. Using the directive Shadowing (Dir->Fileset->Include->Options), Bareos can be configured to detect and exclude duplicates automatically. To include names containing spaces, enclose the name between double-quotes.
There are a number of special cases when specifying directories and files. They are:
@filename
Any name preceded by an at-sign (@) is assumed to be the name of a file, which contains a list of files each preceded by a “File =”. The named file is read once when the configuration file is parsed during the Director startup. Note, that the file is read on the Director’s machine and not on the Client’s. In fact, the @filename can appear anywhere within a configuration file where a token would be read, and the contents of the named file will be logically inserted in the place of the @filename. What must be in the file depends on the location the @filename is specified in the conf file. For example:
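A sketch (the list file path is illustrative):

Include {
  Options { compression = GZIP }
  @/home/files/my-files
}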
File = "<includefile-server"
Any file item preceded by a less-than sign (<) will be taken to be a file. This file will be read on the Director's machine (see below for doing it on the Client machine) at the time the Job starts, and the data will be assumed to be a list of directories or files, one per line, to be included. The names should start in column 1 and should not be quoted even if they contain spaces. This feature allows you to modify the external file and change what will be saved without stopping and restarting Bareos, as would be necessary if using the @ modifier noted above.
File = "\\<includefile-client"
If you precede the less-than sign (<) with two backslashes as in \\<, the file-list will be read on the Client machine instead of on the Director's machine.
File = "|command-server"
Any name beginning with a vertical bar (|) is assumed to be the name of a program. This program will be executed on the Director's machine at the time the Job starts (not when the Director reads the configuration file), and any output from that program will be assumed to be a list of files or directories, one per line, to be included. Before submitting the specified command, Bareos will perform character substitution.
This allows you to have a job that, for example, includes all the local partitions even if you change the partitioning by adding a disk. The examples below show you how to do this. However, please note two things:
if you want the local filesystems, you probably should be using the
FS Type (Dir->Fileset->Include->Options)
directive and set One FS (Dir->Fileset->Include->Options) = no.
the exact syntax of the command needed in the examples below is very system dependent. For example, on recent Linux systems, you may need to add the -P option; on FreeBSD systems, the options will be different as well.
In general, you will need to prefix your command or commands with a sh -c so that they are invoked by a shell. This will not be the case if you are invoking a script as in the second example below. Also, you must take care to escape (precede with a \) wild-cards, shell characters, and to ensure that any spaces in your command are escaped as well. If you use single quotes (') within double quotes ("), Bareos will treat everything between the single quotes as one field so it will not be necessary to escape the spaces. In general, getting all the quotes and escapes correct is a real pain as you can see by the next example. As a consequence, it is often easier to put everything in a file and simply use the file name within Bareos. In that case the sh -c will not be necessary providing the first line of the file is #!/bin/sh. As an example:
will produce a list of all the local partitions on a Linux system. Quoting is a real problem because you must quote for Bareos, which consists of preceding every \ and every " with a \, and you must also quote for the shell command. In the end, it is probably easier just to execute a script file with:
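Assuming the my_partitions script is installed on the Director and marked executable:

File = "|my_partitions"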
where my_partitions has:
#!/bin/sh
df -l | grep "^/dev/hd[ab]" | grep -v ".*/tmp" \
  | awk "{print \$6}"
File = "\\|command-client"
If the vertical bar (|) in front of my_partitions is preceded by two backslashes as in \\|, the program will be executed on the Client's machine instead of on the Director's machine. An example, provided by John Donagher, that backs up all the local UFS partitions on a remote system is shown below. It requires two backslash characters after the double quote (one preserves the next one). If you are a Linux user, just change the ufs to ext3 (or your preferred filesystem type), and you will be in business.
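A reconstruction of that example (treat the exact quoting as a sketch to verify on your system):

FileSet {
  Name = "All local partitions"
  Include {
    Options { signature = SHA1; onefs = yes; }
    File = "\\|bash -c \"df -klF ufs | tail +2 | awk '{print \$6}'\""
  }
}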
If you know what filesystems you have mounted on your system, e.g. for Linux only using ext2, ext3 or ext4, you can backup all local filesystems using something like:
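A hedged sketch (the FileSet name is illustrative):

FileSet {
  Name = "All local filesystems"
  Include {
    Options {
      signature = SHA1
      One FS = no
      FS Type = ext2, ext3, ext4
    }
    File = /
  }
}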
- Raw Partition
If you explicitly specify a block device such as /dev/hda1, then Bareos will assume that this is a raw partition to be backed up. In this case, you are strongly urged to specify a Sparse=yes include option, otherwise you will save the whole partition rather than just the actual data that the partition contains. For example, the Include sketched below will backup the data in device /dev/hd6. Note, /dev/hd6 must be the raw partition itself. Bareos will not back it up as a raw device if you specify a symbolic link to a raw device such as may be created by the LVM Snapshot utilities.
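A sketch of such an Include (the signature option is incidental):

Include {
  Options {
    signature = MD5
    sparse = yes
  }
  File = /dev/hd6
}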
- Exclude Dir Containing
- Type:
filename
This directive can be added to the Include section of the FileSet resource. If the specified filename (filename-string) is found on the Client in any directory to be backed up, the whole directory will be ignored (not backed up). We recommend using the filename .nobackup, as it is a hidden file on Unix systems and its name explains the purpose of the file. For example:
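A sketch (the FileSet name is illustrative):

FileSet {
  Name = "Example FileSet"
  Include {
    Options {
      signature = MD5
    }
    File = /home
    Exclude Dir Containing = .nobackup
  }
}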
But in /home, there may be hundreds of directories of users and some people want to indicate that they don't want to have certain directories backed up. For example, with the above FileSet, if the user or sysadmin creates a file named .nobackup in specific directories, such as
/home/user/www/cache/.nobackup
/home/user/temp/.nobackup
then Bareos will not backup the two directories named:
/home/user/www/cache
/home/user/temp
Subdirectories will not be backed up. That is, the directive applies to the two directories in question and any children (be they files, directories, etc).
- Plugin
- Type:
“plugin-name”
“:plugin-parameter1”
“:plugin-parameter2”
“:…”
Instead of only specifying files, a file set can also use plugins. Plugins are additional libraries that handle specific requirements. The purpose of plugins is to provide an interface to any system program for backup and restore. That allows you, for example, to do database backups without a local dump.
The syntax and semantics of the Plugin directive require the first part of the string up to the colon to be the name of the plugin. Everything after the first colon is ignored by the File daemon but is passed to the plugin. Thus the plugin writer may define the meaning of the rest of the string as he wishes.
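A hedged sketch using the bpipe plugin (database name, dump commands and the virtual file path are illustrative), with the plugin string spread over multiple quoted lines:

Include {
  Options { signature = XXH128 }
  Plugin = "bpipe"
           ":file=/MYSQL/mydatabase.sql"
           ":reader=mysqldump mydatabase"
           ":writer=mysql mydatabase"
}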
Since Version >= 20 the plugin string can be spread over multiple lines using quotes as shown above.
For more information, see File Daemon Plugins.
It is also possible to define more than one plugin directive in a FileSet to do several database dumps at once.
- Options
See the FileSet Options Resource section.
FileSet Exclude Resource
FileSet Exclude resources are very similar to Include resources, except that they only allow the following directives:
- File
- Type:
“path”
- Type:
“<includefile-server”
- Type:
“\\<includefile-client”
- Type:
“|command-server”
- Type:
“\\|command-client”
Files to exclude are described in the same way as in the FileSet Include Resource.
For example:
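A sketch of a FileSet with an Exclude section (paths are illustrative):

FileSet {
  Name = "Exclusion_example"
  Include {
    Options { signature = SHA1 }
    File = /
    File = /boot
  }
  Exclude {
    File = /proc
    File = /tmp
    File = .journal
    File = .autofsck
  }
}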
Another way to exclude files and directories is to use the
Exclude (Dir->Fileset->Include->Options) = yes
setting in an Include section.
FileSet Options Resource
The Options resource is optional, but when specified, it will contain a list of keyword=value options to be applied to the file-list. See below for the definition of file-list. Multiple Options resources may be specified one after another. As the files are found in the specified directories, the Options will be applied to the filenames to determine if and how the file should be backed up. The wildcard and regular expression pattern matching parts of the Options resources are checked in the order they are specified in the FileSet until the first one that matches. Once one matches, the compression and other flags within the Options specification will apply to the pattern matched.
A key point is that in the absence of an Options resource, or if no Options resource matches, every file is accepted for backing up. This means that if you want to exclude something, you must explicitly specify an Options resource with exclude = yes and some pattern matching.
Once Bareos determines that the Options resource matches the file under consideration, that file will be saved without looking at any other Options resources that may be present. This means that any wild cards must appear before an Options resource without wild cards.
If Bareos checks all the Options resources for a file under consideration for backup but finds no matches (generally because of wild cards that don't match), Bareos will by default back up the file. This is quite logical if you consider the case where no Options clause is specified and you want everything to be backed up, and it is important to keep in mind when excluding as mentioned above.
However, one additional point is that in the case that no match was found, Bareos will use the options found in the last Options resource. As a consequence, if you want a particular set of “default” options, you should put them in an Options resource after any other Options.
It is a good idea to put all your wild-card and regex expressions inside double quotes to prevent conf file scanning problems.
This is perhaps a bit overwhelming, so there are a number of examples included below to illustrate how this works.
If you find yourself using a lot of Regex statements, which will cost quite a lot of CPU time, we recommend you simplify them if you can, or better yet convert them to Wild statements, which are much more efficient.
The directives within an Options resource may be one of the following:
- Auto Exclude
- Type:
BOOLEAN
- Default value:
yes
Automatically exclude files not intended for backup. Currently only used for Windows, to exclude files defined in the registry key
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\BackupRestore\FilesNotToBackup
, see section FilesNotToBackup Registry Key.
Since Version >= 14.2.2.
- Compression
- Type:
<GZIP|GZIP1|…|GZIP9|LZO|LZFAST|LZ4|LZ4HC>
Configures the software compression to be used by the File Daemon. The compression is done on a file by file basis.
Software compression becomes important if you are writing to a device that does not support compression by itself (e.g. hard disks); all modern tape drives do support hardware compression.
Software compression can also be helpful to reduce the required network bandwidth, as compression is done on the File Daemon. In most cases, LZ4 is the best choice, because it is relatively fast. If the compression rate of LZ4 isn't good enough, you might consider LZ4HC. However, using Bareos software compression and device hardware compression together is not advised, as trying to compress precompressed data is a very CPU-intensive task and will probably end up producing even larger data.
You can overwrite this option per Storage resource using the
Allow Compression (Dir->Storage) = no
option.
- GZIP
All files saved will be software compressed using the GNU ZIP compression format.
Specifying GZIP uses the default compression level 6 (i.e. GZIP is identical to GZIP6). If you want a different compression level (1 through 9), you can specify it by appending the level number with no intervening spaces to GZIP. Thus compression=GZIP1 would give minimum compression but the fastest algorithm, and compression=GZIP9 would give the highest level of compression, but requires more computation. According to the GZIP documentation, compression levels greater than six generally give very little extra compression and are rather CPU intensive.
- LZFAST
Deprecated since version 19.2.
All files saved will be software compressed using the LZFAST compression format.
LZFAST provides much faster compression and decompression speed but lower compression ratio than GZIP. If your CPU is fast enough you should be able to compress your data without making the backup duration longer.
Warning
This is a nonstandard compression algorithm and support for compressing backups using it may be removed in a future version. Please consider using one of the other algorithms instead.
- LZO
All files saved will be software compressed using the LZO compression format.
LZO provides much faster compression and decompression speed but lower compression ratio than GZIP. If your CPU is fast enough you should be able to compress your data without making the backup duration longer.
Note that Bareos only uses one compression level, LZO1X-1, specified by LZO.
- LZ4
All files saved will be software compressed using the LZ4 compression format.
LZ4 provides much faster compression and decompression speed but lower compression ratio than GZIP. If your CPU is fast enough you should be able to compress your data without making the backup duration longer.
Both LZ4 and LZ4HC have the same decompression speed which is about twice the speed of the LZO compression. So for a restore both LZ4 and LZ4HC are good candidates.
- LZ4HC
All files saved will be software compressed using the LZ4HC compression format.
LZ4HC is the High Compression version of the LZ4 compression. It has a higher compression ratio than LZ4 and is more comparable to GZIP-6 in both compression rate and CPU usage.
Both LZ4 and LZ4HC have the same decompression speed which is about twice the speed of the LZO compression. So for a restore both LZ4 and LZ4HC are good candidates.
- Signature
- Type:
<MD5|SHA1|SHA256|SHA512|XXH128>
It is strongly recommended to use signatures for your backups. Note, only one type of signature can be computed per file.
You have to find the right balance between speed and security. Today's CPUs often have special instructions that can calculate checksums very fast. So if in doubt, testing the speed of the different signatures in your environment will show which algorithm is the fastest. The XXH128 algorithm is not cryptographically safe, but it is suitable for non-cryptographic purposes (like calculating a checksum to avoid data corruption, as used by Bareos here). Bareos suggests XXH128 as the preferred algorithm due to the fact that its computational requirements are lower by magnitudes. The calculation of a cryptographic checksum like MD5 or SHA has proven to be the bottleneck in environments with high-speed requirements.
- MD5
An MD5 signature (128 bits) will be computed for each file saved. Adding this option generates about 5% extra overhead for each file saved. In addition to the additional CPU time, the MD5 signature adds 16 more bytes per file to your catalog.
- SHA1
An SHA1 (160 bits) signature will be computed for each file saved. The SHA1 algorithm is purported to be somewhat slower than the MD5 algorithm, but at the same time is significantly better from a cryptographic point of view (i.e. much fewer collisions). The SHA1 signature adds 20 bytes per file to your catalog.
- SHA256
An SHA256 signature (256 bits) will be computed for each file saved. The SHA256 algorithm is purported to be slower than the SHA1 algorithm, but at the same time is significantly better from a cryptographic point of view (i.e. no collisions found). The SHA256 signature requires 32 bytes per file in the catalog.
- SHA512
An SHA512 signature (512 bits) will be computed for each file saved. This is the slowest algorithm and is equivalent in terms of cryptographic value to SHA256. The SHA512 signature requires 64 bytes per file in the catalog.
- XXH128
An xxHash signature (XXH3, 128 bits) will be computed for each file saved. This is the algorithm with the least computational requirements, but it is also not cryptographically safe. The XXH128 signature requires 16 bytes per file in the catalog.
- Base Job
- Type:
<options>
The option letters specified are used when running a Backup Level=Full with BaseJobs. The option letters are the same as in the verify= option below.
- Accurate
- Type:
<options>
The option letters specified are used when running a Backup Level=Incremental/Differential in Accurate mode. The option letters are the same as in the verify= option below. The default setting is mcs, which means that modification time, change time and size are compared.
- Verify
- Type:
<options>
The options letters specified are used when running a Verify Level=Catalog as well as the DiskToCatalog level job. The options letters may be any combination of the following:
- i
compare the inodes
- p
compare the permission bits
- n
compare the number of links
- u
compare the user id
- g
compare the group id
- s
compare the size
- a
compare the access time
- m
compare the modification time (st_mtime)
- c
compare the change time (st_ctime)
- d
report file size decreases
- 5
compare the MD5 signature
- 1
compare the SHA1 signature
- A
Only for the Accurate option; it causes the file to always be backed up.
A useful set of general options on the Level=Catalog or Level=DiskToCatalog verify is pins5 i.e. compare permission bits, inodes, number of links, size, and MD5 changes.
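For instance, inside the FileSet Options resource (the signature line is needed so that MD5 checksums are available for the '5' comparison):

Options {
  signature = MD5
  verify = pins5
}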
- One FS
- Type:
yes|no
- Default value:
yes
If set to yes, Bareos will remain on a single file system. That is, it will not backup file systems that are mounted on a subdirectory. If you are using a Unix system, you may not even be aware that there are several different filesystems, as they are often automatically mounted by the OS (e.g. /dev, /net, /sys, /proc, …). Bareos will inform you when it decides not to traverse into another filesystem; the job report will then contain an informational message noting that the directory in question is a different filesystem (this message is discussed further below). This can be very useful if you forgot to backup a particular partition. If you wish to backup multiple filesystems, you can explicitly list each filesystem you want saved. Otherwise, if you set the onefs option to no, Bareos will backup all mounted file systems (i.e. traverse mount points) that are found within the FileSet. Thus if you have NFS or Samba file systems mounted on a directory listed in your FileSet, they will also be backed up. Normally, it is preferable to set
One FS (Dir->Fileset->Include->Options) = yes
and to explicitly name each filesystem you want backed up. Explicitly naming the filesystems you want backed up avoids the possibility of getting into an infinite loop recursing filesystems. Another possibility is to use One FS (Dir->Fileset->Include->Options) = no
and to set FS Type (Dir->Fileset->Include->Options) = ext2, .... See the example below for more details. If you think that Bareos should be backing up a particular directory and it is not, and you have onefs=yes set, before you complain, please do:
stat / stat <filesystem>
where you replace filesystem with the one in question. If the Device: number is different for / and for your filesystem, then they are on different filesystems. E.g.
root@host:~# stat /
  File: `/'
  Size: 4096   Blocks: 16   IO Block: 4096   directory
Device: 302h/770d   Inode: 2   Links: 26
Access: (0755/drwxr-xr-x)  Uid: ( 0/ root)   Gid: ( 0/ root)
Access: 2005-11-10 12:28:01.000000000 +0100
Modify: 2005-09-27 17:52:32.000000000 +0200
Change: 2005-09-27 17:52:32.000000000 +0200

root@host:~# stat /net
  File: `/home'
  Size: 4096   Blocks: 16   IO Block: 4096   directory
Device: 308h/776d   Inode: 2   Links: 7
Access: (0755/drwxr-xr-x)  Uid: ( 0/ root)   Gid: ( 0/ root)
Access: 2005-11-10 12:28:02.000000000 +0100
Modify: 2005-11-06 12:36:48.000000000 +0100
Change: 2005-11-06 12:36:48.000000000 +0100
Also be aware that even if you include /home in your list of files to backup, as you most likely should, you will get the informational message that "/home is a different filesystem" when Bareos is processing the / directory. This message does not indicate an error. This message means that while examining the File = referred to in the second part of the message, Bareos will not descend into the directory mentioned in the first part of the message. However, it is possible that the separate filesystem will be backed up despite the message. For example, consider the FileSet sketched below, where /var is a separate filesystem.
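The Options line is incidental; only the two File directives matter here:

FileSet {
  Name = "Full Set"
  Include {
    Options { signature = MD5 }
    File = /
    File = /var
  }
}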
/var
is a separate filesystem. In this example, you will get a message saying that Bareos will not decend from/
into/var
. But it is important to realise that Bareos will descend into/var
from the second File directive shown above. In effect, the warning is bogus, but it is supplied to alert you to possible omissions from your FileSet. In this example,/var
will be backed up. If you changed the FileSet such that it did not specify/var
, then/var
will not be backed up.
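A minimal sketch of the kind of FileSet being discussed (the resource name is a placeholder):

FileSet {
  Name = "RootAndVar"
  Include {
    Options {
      OneFS = yes
      Signature = MD5
    }
    File = /
    File = /var      # second File directive; Bareos descends into /var from here
  }
}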
- Honor No Dump Flag
- Type:
yes|no
If your file system supports the nodump flag (e.g. most BSD-derived systems), Bareos will honor the setting of the flag when this option is set to yes. Files having this flag set will not be included in the backup and will not show up in the catalog. For directories with the nodump flag set, recursion is turned off and the directory will be listed in the catalog. If the honor nodump flag option is not defined or set to no, every file and directory will be eligible for backup.
- Portable
- Type:
yes|no
If set to yes (default is no), the Bareos File daemon will back up Win32 files in a portable format, but not all Win32 file attributes will be saved and restored. By default, this option is set to no, which means that on Win32 systems, the data will be backed up using Windows API calls and on WinNT/2K/XP, all the security and ownership attributes will be properly backed up (and restored). However, this format is not portable to other systems – e.g. Unix, Win95/98/Me. When backing up Unix systems, this option is ignored, and unless you have a specific need for portable backups, we recommend accepting the default (no) so that the maximum information concerning your files is saved.
- Recurse
- Type:
yes|no
If set to yes (the default), Bareos will recurse (or descend) into all subdirectories found unless the directory is explicitly excluded using an exclude definition. If you set recurse=no, Bareos will save the subdirectory entries, but not descend into the subdirectories, and thus will not save the files or directories contained in the subdirectories. Normally, you will want the default (yes).
- Sparse
- Type:
yes|no
Enable special code that checks for sparse files such as created by ndbm. The default is no, so no checks are made for sparse files. You may specify sparse=yes even on files that are not sparse files. No harm will be done, but there will be a small additional overhead to check for buffers of all zero, and if there is a 32K block of all zeros (see below), that block will become a hole in the file, which may not be desirable if the original file was not a sparse file.
Restrictions: Bareos reads files in 32K buffers. If the whole buffer is zero, it will be treated as a sparse block and not written to tape. However, if any part of the buffer is non-zero, the whole buffer will be written to tape, possibly including some disk sectors (generally 4096 bytes) that are all zero. As a consequence, Bareos’s detection of sparse blocks is in 32K increments rather than the system block size. If anyone considers this to be a real problem, please send in a request for change with the reason.
If you are not familiar with sparse files, consider a file where you wrote 512 bytes at offset zero and then 512 bytes at offset one million. The operating system will allocate only two blocks, and the empty space or hole will have nothing allocated. However, when you read the sparse file and read the addresses where nothing was written, the OS will return all zeros as if the space were allocated, and if you back up such a file, a lot of space will be used to write zeros to the volume. Worse yet, when you restore the file, all the previously empty space will now be allocated using much more disk space. By turning on the sparse option, Bareos will specifically look for empty space in the file, and any empty space will not be written to the Volume, nor will it be restored. The price to pay for this is that Bareos must search each block it reads before writing it. On a slow system, this may be important. If you suspect you have sparse files, you should benchmark the difference or set sparse for only those files that are really sparse.
You probably should not use this option on files or raw disk devices that are not really sparse files (i.e. have holes in them).
- Read Fifo
- Type:
yes|no
If enabled, tells the Client to read the data on a backup and write the data on a restore to any FIFO (pipe) that is explicitly mentioned in the FileSet. In this case, you must have a program already running that writes into the FIFO for a backup or reads from the FIFO on a restore. This can be accomplished with the RunBeforeJob directive. If this is not the case, Bareos will hang indefinitely on reading/writing the FIFO. When this is not enabled (default), the Client simply saves the directory entry for the FIFO.
Normally, when Bareos runs a RunBeforeJob, it waits until that script terminates, and if the script accesses the FIFO to write into it, the Bareos job will block and everything will stall. However, Vladimir Stavrinov has supplied a tip that allows this feature to work correctly. He simply adds the following to the beginning of the RunBeforeJob script:
exec > /dev/null
This feature can be used to do a “hot” database backup. You can use the RunBeforeJob to create the fifo and to start a program that dynamically reads your database and writes it to the fifo. Bareos will then write it to the Volume.
During the restore operation, the inverse is true: after Bareos creates the FIFO, if there was any data stored with it (no need to explicitly list it or add any options), that data will be written back to the FIFO. As a consequence, if any such FIFOs exist in the fileset to be restored, you must ensure that there is a reader program or Bareos will block, and after one minute, Bareos will time out the write to the FIFO and move on to the next file.
If you are planning to use a FIFO for backup, you should also take a look at the bpipe Plugin section.
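A rough sketch of the hot-backup approach described above; the FIFO path and script name are placeholders, not taken from the original manual:

Job {
  Name = "HotDBBackup"
  # The script creates the FIFO and starts a dump that writes into it,
  # redirecting its output as described above (exec > /dev/null).
  Run Before Job = "/usr/local/bin/start_db_dump_to_fifo.sh"
  FileSet = "DBFifo"
  # ... remaining Job directives ...
}

FileSet {
  Name = "DBFifo"
  Include {
    Options {
      ReadFifo = yes
      Signature = MD5
    }
    File = /var/spool/backup/db.fifo    # the FIFO fed by the RunBeforeJob script
  }
}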
- No Atime
- Type:
yes|no
If enabled, and if your Operating System supports the O_NOATIME file open flag, Bareos will open all files to be backed up with this option. It makes it possible to read a file without updating the inode atime (and also without the inode ctime update which happens if you try to set the atime back to its previous value). It also prevents a race condition when two programs are reading the same file, but only one does not want to change the atime. It’s most useful for backup programs and file integrity checkers (and Bareos fits into both categories).
This option is particularly useful for sites where users are sensitive to their MailBox file access time. It replaces the keepatime option without the inconveniences of that option (see below).
If your Operating System does not support this option, it will be silently ignored by Bareos.
- Mtime Only
- Type:
yes|no
- Default value:
no
If enabled, tells the Client that the selection of files during Incremental and Differential backups should be based only on the st_mtime value in the stat() packet. The default is no, which means that the selection of files to be backed up will be based on both the st_mtime and the st_ctime values. In general, it is not recommended to use this option.
- Keep Atime
- Type:
yes|no
The default is no. When enabled, Bareos will reset the st_atime (access time) field of files that it backs up to their value prior to the backup. This option is not generally recommended as there are very few programs that use st_atime, and the backup overhead is increased because of the additional system call necessary to reset the times. However, for some files, such as mailboxes, when Bareos backs up the file, the user will notice that someone (Bareos) has accessed the file. In this case, keepatime can be useful. (I’m not sure this works on Win32).
Note, if you use this feature, when Bareos resets the access time, the change time (st_ctime) will automatically be modified by the system, so on the next incremental job, the file will be backed up even if it has not changed. As a consequence, you will probably also want to use mtimeonly = yes as well as keepatime (thanks to Rudolf Cejka for this tip).
- Check File Changes
- Type:
yes|no
- Default value:
no
If enabled, the Client will check the size and age of each file after its backup to see whether it changed during the backup. If the time or size does not match, an error similar to the following is raised:
zog-fd: Client1.2007-03-31_09.46.21 Error: /tmp/test mtime changed during backup.
Note
This option is intended to be used with File (Dir->Fileset->Include) resources. Using it with Plugin (Dir->Fileset->Include) filesets will generate warnings during backup.
- Hard Links
- Type:
yes|no
- Default value:
no
Warning
Since Version >= 23.0.0 the default is no.
When disabled, Bareos will backup each file individually and restore them as unrelated files as well. The fact that the files were hard links will be lost.
When enabled, this directive will cause hard links to be backed up as hard links. For each set of hard links, the file daemon will only backup the file contents once – when it encounters the first file of that set – and only backup meta data and a reference to that first file for each subsequent file in that set.
Be aware that the process of keeping track of the hard links can be quite expensive if you have lots of them (tens of thousands or more). Backups become very long and the File daemon will consume a lot of CPU power checking hard links.
See the related performance option Optimize For Size (Dir->Director).
Note
If you created backups with
Hard Links (Dir->Fileset->Include->Options) = yes
you should only ever restore all files in that set of hard links at once or not restore any of them. If you were to restore a file inside that set, which was not the file with the contents attached, then Bareos will not restore its data, but instead just try to link with the file it references and restore its meta data. This means that the newly restored file might not actually have the same contents as when it was backed up.
- Wild
- Type:
<string>
Specifies a wild-card string to be applied to the filenames and directory names. Note, if Exclude is not enabled, the wild-card will select which files are to be included. If Exclude=yes is specified, the wild-card will select which files are to be excluded. Multiple wild-card directives may be specified, and they will be applied in turn until the first one that matches. Note, if you exclude a directory, no files or directories below it will be matched.
It is recommended to enclose the string in double quotes.
You may want to test your expressions prior to running your backup by using the bwild program. You can also test your full FileSet definition by using the estimate command.
An example of excluding with the WildFile option is presented at FileSet Examples
- Wild Dir
- Type:
<string>
Specifies a wild-card string to be applied to directory names only. No filenames will be matched by this directive. Note, if Exclude is not enabled, the wild-card will select directories to be included. If Exclude=yes is specified, the wild-card will select which directories are to be excluded. Multiple wild-card directives may be specified, and they will be applied in turn until the first one that matches. Note, if you exclude a directory, no files or directories below it will be matched.
It is recommended to enclose the string in double quotes.
You may want to test your expressions prior to running your backup by using the bwild program. You can also test your full FileSet definition by using the estimate command.
An example of excluding with the WildFile option is presented at FileSet Examples
- Wild File
- Type:
<string>
Specifies a wild-card string to be applied to non-directories. That is no directory entries will be matched by this directive. However, note that the match is done against the full path and filename, so your wild-card string must take into account that filenames are preceded by the full path. If
Exclude (Dir->Fileset->Include->Options)
is not enabled, the wild-card will select which files are to be included. IfExclude (Dir->Fileset->Include->Options) = yes
is specified, the wild-card will select which files are to be excluded. Multiple wild-card directives may be specified, and they will be applied in turn until the first one that matches. It is recommended to enclose the string in double quotes.
You may want to test your expressions prior to running your backup by using the bwild program. You can also test your full FileSet definition by using the estimate command.
An example of excluding with the WildFile option is presented at FileSet Examples
- Regex
- Type:
<string>
Specifies a POSIX extended regular expression to be applied to the filenames and directory names, which include the full path. If Exclude is not enabled, the regex will select which files are to be included. If Exclude (Dir->Fileset->Include->Options) = yes is specified, the regex will select which files are to be excluded. Multiple regex directives may be specified within an Options resource, and they will be applied in turn until the first one that matches. Note, if you exclude a directory, no files or directories below it will be matched.
It is recommended to enclose the string in double quotes.
The regex libraries differ from one operating system to another, and in addition, regular expressions are complicated, so you may want to test your expressions prior to running your backup by using the bregex program. You can also test your full FileSet definition by using the estimate command.
If you find yourself using a lot of Regex statements, which will cost quite a lot of CPU time, we recommend you simplify them if you can or, better yet, convert them to Wild statements, which are much more efficient.
- Regex File
- Type:
<string>
Specifies a POSIX extended regular expression to be applied to non-directories. No directories will be matched by this directive. However, note that the match is done against the full path and filename, so your regex string must take into account that filenames are preceded by the full path. If Exclude is not enabled, the regex will select which files are to be included. If Exclude=yes is specified, the regex will select which files are to be excluded. Multiple regex directives may be specified, and they will be applied in turn until the first one that matches.
It is recommended to enclose the string in double quotes.
The regex libraries differ from one operating system to another, and in addition, regular expressions are complicated, so you may want to test your expressions prior to running your backup by using the bregex program.
- Regex Dir
- Type:
<string>
Specifies a POSIX extended regular expression to be applied to directory names only. No filenames will be matched by this directive. Note, if Exclude is not enabled, the regex will select which directories are to be included. If Exclude=yes is specified, the regex will select which directories are to be excluded. Multiple regex directives may be specified, and they will be applied in turn until the first one that matches. Note, if you exclude a directory, no files or directories below it will be matched.
It is recommended to enclose the string in double quotes.
The regex libraries differ from one operating system to another, and in addition, regular expressions are complicated, so you may want to test your expressions prior to running your backup by using the bregex program.
- Exclude
- Type:
BOOLEAN
When enabled, any files matched within the Options will be excluded from the backup.
- ACL Support
- Type:
yes|no
- Default value:
yes
Since Version >= 18.2.4 the default is yes. If this option is set to yes, and you have the POSIX libacl installed on your Linux system, Bareos will back up the file and directory Unix Access Control Lists (ACL) as defined in IEEE Std 1003.1e draft 17 and “POSIX.1e” (abandoned). This feature is available on Unix systems only and requires the Linux ACL library. Bareos is automatically compiled with ACL support if the libacl library is installed on your Linux system (shown in config.out). While restoring the files, Bareos will try to restore the ACLs; if there is no ACL support available on the system, Bareos restores the files and directories but not the ACL information. Please note: if you back up an EXT3 or XFS filesystem with ACLs and then restore it to a different filesystem (perhaps ReiserFS) that does not support ACLs, the ACLs will be ignored.
For other operating systems there is support for either POSIX ACLs or the more extensible NFSv4 ACLs.
The ACL stream format between Operating Systems is not compatible, so for example an ACL saved on Linux cannot be restored on Solaris.
The following Operating Systems are currently supported:
AIX (pre-5.3 (POSIX) and post 5.3 (POSIX and NFSv4) ACLs)
Darwin
FreeBSD (POSIX and NFSv4/ZFS ACLs)
HPUX
IRIX
Linux
Solaris (POSIX and NFSv4/ZFS ACLs)
Tru64
- XAttr Support
- Type:
yes|no
- Default value:
yes
Since Version >= 18.2.4 the default is yes. If this option is set to yes, and your operating system supports either so-called Extended Attributes or Extensible Attributes, Bareos will back up the file and directory XATTR data. This feature is available on UNIX only and depends on support of some specific library calls in libc.
The XATTR stream format between Operating Systems is not compatible so an XATTR saved on Linux cannot for example be restored on Solaris.
On some operating systems ACLs are also stored as Extended Attributes (Linux, Darwin, FreeBSD). Bareos checks if you have the aclsupport option enabled and, if so, will not save the same information again when saving extended attribute information. Thus ACLs are only saved once.
The following Operating Systems are currently supported:
AIX (Extended Attributes)
Darwin (Extended Attributes)
FreeBSD (Extended Attributes)
IRIX (Extended Attributes)
Linux (Extended Attributes)
NetBSD (Extended Attributes)
Solaris (Extended Attributes and Extensible Attributes)
Tru64 (Extended Attributes)
- Ignore Case
- Type:
yes|no
The default is no. On Windows systems, you will almost surely want to set this to yes. When this directive is set to yes, the case of characters will be ignored in wild-card and regex comparisons. That is, an uppercase A will match a lowercase a.
- FS Type
- Type:
filesystem-type
This option allows you to select files and directories by the filesystem type. Example filesystem-type names are:
btrfs, ext2, ext3, ext4, jfs, ntfs, proc, reiserfs, xfs, nfs, vfat, usbdevfs, sysfs, smbfs, iso9660.
You may have multiple Fstype directives, and thus permit matching of multiple filesystem types within a single Options resource. If the type specified on the fstype directive does not match the filesystem for a particular directory, that directory will not be backed up. This directive can be used to prevent backing up non-local filesystems. Normally, when you use this directive, you would also set One FS (Dir->Fileset->Include->Options) = no so that Bareos will traverse filesystems; see the sketch below.
This option is not implemented in Win32 systems.
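A minimal sketch of an Options block using FS Type (the filesystem types listed are only examples):

Include {
  Options {
    FS Type = ext4
    FS Type = xfs
    OneFS = no       # traverse mount points, but only onto the listed filesystem types
    Signature = MD5
  }
  File = /
}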
- Drive Type
- Type:
Windows-drive-type
This option is effective only on Windows machines and is somewhat similar to the Unix/Linux
FS Type (Dir->Fileset->Include->Options)
described above, except that it allows you to select which Windows drive types you want to allow. By default all drive types are accepted. The permitted drivetype names are:
removable, fixed, remote, cdrom, ramdisk
You may have multiple Drivetype directives, and thus permit matching of multiple drive types within a single Options resource. If the type specified on the drivetype directive does not match the drive type of a particular directory, that directory will not be backed up. This directive can be used to prevent backing up non-local drives. Normally, when you use this directive, you would also set One FS (Dir->Fileset->Include->Options) = no so that Bareos will traverse filesystems.
This option is not implemented in Unix/Linux systems.
- Hfs Plus Support
- Type:
yes|no
This option allows you to turn on support for Mac OS X HFS Plus Finder information.
- Strip Path
- Type:
<integer>
This option will cause the given number of leading path elements to be stripped from the front of the full path/filename being backed up. This can be useful if you are migrating data from another vendor or if you have taken a snapshot into some subdirectory. This directive can cause your filenames to be overlaid with regular backup data, so it should be used only by experts and with great care.
- Size
- Type:
sizeoption
This option will allow you to select files by their actual size. You can select files smaller than a certain size, files bigger than a certain size, files whose size lies in a certain range, or files whose size is within 1 % of a given size.
The following settings can be used:
- <size>-<size>
Select files in the range size - size.
- <size
Select files smaller than size.
- >size
Select files bigger than size.
- size
Select files which are within 1 % of size.
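For illustration, a hypothetical Options block that selects only files bigger than 100 megabytes (the value and surrounding directives are examples, not taken from the original manual):

Options {
  Size = >100M
  Signature = MD5
}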
- Shadowing
- Type:
none|localwarn|localremove|globalwarn|globalremove
- Default value:
none
This option performs a check within the fileset for any file-list entries which are shadowing each other. Let’s say you specify / and /usr, but /usr is not a separate filesystem. Then in the normal situation both / and /usr would lead to data being backed up twice.
The following settings can be used:
- none
Do NO shadowing check
- localwarn
Do shadowing check within one include block and warn
- localremove
Do shadowing check within one include block and remove duplicates
- globalwarn
Do shadowing check between all include blocks and warn
- globalremove
Do shadowing check between all include blocks and remove duplicates
The local and global part of the setting determines whether the check should be performed only within one include block (local) or between multiple include blocks of the same fileset (global). The warn and remove part of the keyword sets the action, i.e. warn the user about shadowing or remove the entry that shadows the other.
Example for a fileset resource with fileset shadow warning enabled:
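A minimal sketch of such a FileSet (name and paths are examples):

FileSet {
  Name = "Example-Shadowing"
  Include {
    Options {
      Shadowing = localwarn
      Signature = MD5
    }
    File = /
    File = /usr      # shadowed by / if /usr is not a separate filesystem
  }
}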
- Meta
- Type:
tag
This option will add a meta tag to a fileset. These meta tags are used by the Native NDMP protocol to pass NDMP backup or restore environment variables via the Data Management Agent (DMA) in Bareos to the remote NDMP Data Agent. You can have zero or more metatags which are all passed to the remote NDMP Data Agent.
FileSet Examples
The following is an example of a valid FileSet resource definition. Note, the first Include pulls in the contents of the file /etc/backup.list
when Bareos is started (i.e. the @), and that file must have each filename to be backed up preceded by a File = and on a separate line.
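A sketch consistent with the description that follows (resource name and option spellings are assumptions):

FileSet {
  Name = "Full Set"
  Include {
    Options {
      Compression = LZ4
      Signature = XXH128
      Sparse = yes
    }
    @/etc/backup.list
  }
  Include {
    Options {
      WildFile = "*.o"
      WildFile = "*.exe"
      Exclude = yes
    }
    File = /root/myfile
    File = /usr/lib/another_file
  }
}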
In the above example, all the files contained in /etc/backup.list
will be compressed with LZ4 compression, an XXH128 signature will be computed on the file’s contents (its data), and sparse file handling will apply.
The two directories /root/myfile
and /usr/lib/another_file
will also be saved without any options, but all files in those directories with the extensions .o
and .exe
will be excluded.
Let’s say that you now want to exclude the directory /tmp
. The simplest way to do so is to add an exclude directive that lists /tmp
. The example above would then become:
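A sketch of that modification (the two Include blocks from the previous sketch stay unchanged):

FileSet {
  Name = "Full Set"
  # ... the two Include blocks from the sketch above ...
  Exclude {
    File = /tmp
  }
}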
You can add wild-cards to the File directives listed in the Exclude directive, but you need to take care because if you exclude a directory, it and all files and directories below it will also be excluded.
Now let’s take a slight variation on the above and suppose you want to save your whole filesystem except /tmp. The problem that comes up is that Bareos will not normally cross from one filesystem to another. Doing a df command, you get the following output:
user@host:~$ df
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/hda5 5044156 439232 4348692 10% /
/dev/hda1 62193 4935 54047 9% /boot
/dev/hda9 20161172 5524660 13612372 29% /home
/dev/hda2 62217 6843 52161 12% /rescue
/dev/hda8 5044156 42548 4745376 1% /tmp
/dev/hda6 5044156 2613132 2174792 55% /usr
none 127708 0 127708 0% /dev/shm
//minimatou/c$ 14099200 9895424 4203776 71% /mnt/mmatou
lmatou:/ 1554264 215884 1258056 15% /mnt/matou
lmatou:/home 2478140 1589952 760072 68% /mnt/matou/home
lmatou:/usr 1981000 1199960 678628 64% /mnt/matou/usr
lpmatou:/ 995116 484112 459596 52% /mnt/pmatou
lpmatou:/home 19222656 2787880 15458228 16% /mnt/pmatou/home
lpmatou:/usr 2478140 2038764 311260 87% /mnt/pmatou/usr
deuter:/ 4806936 97684 4465064 3% /mnt/deuter
deuter:/home 4806904 280100 4282620 7% /mnt/deuter/home
deuter:/files 44133352 27652876 14238608 67% /mnt/deuter/files
And we see that there are a number of separate filesystems (/, /boot, /home, /rescue, /tmp and /usr, not to mention mounted systems). If you specify only / in your Include list, Bareos will only save the filesystem /dev/hda5. To save all filesystems except /tmp without including any of the Samba or NFS mounted systems, and explicitly excluding /tmp, /proc, .journal, and .autofsck, which you will not want saved and restored, you can use the following:
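A sketch of such a FileSet; the excluded paths mirror those named above, while the resource name and signature are placeholders:

FileSet {
  Name = "All Local Filesystems"
  Include {
    Options {
      Signature = MD5
      WildDir = "/proc"
      WildDir = "/tmp"
      WildFile = "/.journal"
      WildFile = "/.autofsck"
      Exclude = yes
    }
    File = /
    File = /boot
    File = /home
    File = /rescue
    File = /usr
  }
}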
Since /tmp
is on its own filesystem and it was not explicitly named in the Include list, it is not really needed in the exclude list. It is better to list it in the Exclude list for clarity, and in case the disks are changed so that it is no longer in its own partition.
Now, let’s assume you only want to back up .Z and .gz files and nothing else. This is a bit trickier because Bareos by default will select everything to back up, so we must exclude everything but .Z and .gz files. If we take the first example above and make the obvious modifications to it, we might come up with a FileSet that looks like this:
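A sketch of that first (flawed) attempt, assuming the /myfile path used below:

FileSet {
  Name = "Full Set"
  Include {
    Options {
      WildFile = "*.Z"
      WildFile = "*.gz"
    }
    File = /myfile
  }
}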
The *.Z and *.gz files will indeed be backed up, but all other files that are not matched by the Options directives will automatically be backed up too (i.e. that is the default rule).
To accomplish what we want, we must explicitly exclude all other files. We do this with the following:
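A sketch of the corrected FileSet (again assuming /myfile):

FileSet {
  Name = "Full Set"
  Include {
    Options {
      WildFile = "*.Z"
      WildFile = "*.gz"
    }
    Options {
      RegexFile = ".*"     # matches every remaining non-directory entry
      Exclude = yes
    }
    File = /myfile
  }
}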
The “trick” here was to add a RegexFile expression that matches all files. It does not match directory names, so all directories in /myfile will be backed up (the directory entry) and any *.Z and *.gz files contained in them. If you know that certain directories do not contain any *.Z or *.gz files and you do not want the directory entries backed up, you will need to explicitly exclude those directories. Backing up directory entries is not very expensive.
Bareos uses the system regex library and some of them are different on different OSes. This can be tested by using the estimate job=job-name listing command in the console and adapting the RegexFile expression appropriately.
Please be aware that allowing Bareos to traverse or change file systems can be very dangerous. For example, with the following:
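A sketch of the kind of FileSet meant here (illustrative only):

FileSet {
  Name = "Traversal example"
  Include {
    Options {
      OneFS = no
    }
    File = /mnt/matou
  }
}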
you will be backing up an NFS mounted partition (/mnt/matou), and since onefs is set to no, Bareos will traverse file systems. Now if /mnt/matou has the current machine’s file systems mounted, as is often the case, you will get yourself into a recursive loop and the backup will never end.
As a final example, let’s say that you have only one or two subdirectories of /home that you want to backup. For example, you want to backup only subdirectories beginning with the letter a and the letter b – i.e. /home/a*
and /home/b*
. Now, you might first try:
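A sketch of that first attempt (not the original listing):

FileSet {
  Name = "Full Set"
  Include {
    Options {
      WildDir = "/home/a*"
      WildDir = "/home/b*"
    }
    File = /home
  }
}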
The problem is that the above will include everything in /home. To get things to work correctly, you need to start with the idea of exclusion instead of inclusion. So, you could simply exclude all directories except the two you want to use:
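One possible sketch of that exclusion approach; the character class is an assumption that fits the remark below about lowercase subdirectory names:

FileSet {
  Name = "Full Set"
  Include {
    Options {
      RegexDir = "^/home/[c-z]"
      Exclude = yes
    }
    File = /home
  }
}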
And assuming that all subdirectories start with a lowercase letter, this would work.
An alternative would be to include the two subdirectories desired and exclude everything else:
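A sketch of the alternative, including the desired subdirectories and then excluding the rest:

FileSet {
  Name = "Full Set"
  Include {
    Options {
      WildDir = "/home/a*"
      WildDir = "/home/b*"
    }
    Options {
      RegexDir = ".*"
      Exclude = yes
    }
    File = /home
  }
}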
The following example shows how to back up only the My Pictures directory inside the My Documents directory for all users in C:/Documents and Settings, i.e. everything matching the pattern:
C:/Documents and Settings/*/My Documents/My Pictures/*
To understand how this can be achieved, there are two important points to remember:
Firstly, Bareos walks over the filesystem depth-first starting from the File = lines. It stops descending when a directory is excluded, so you must include all ancestor directories of each directory containing files to be included.
Secondly, each directory and file is compared to the Options clauses in the order they appear in the FileSet. When a match is found, no further clauses are compared and the directory or file is either included or excluded.
The FileSet resource definition below implements this by including specific directories and files and excluding everything else.
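A sketch of such a FileSet; the drive letter, patterns and option spellings are illustrative and should be adapted to your environment:

FileSet {
  Name = "AllPictures"
  Include {
    File = "C:/Documents and Settings"
    Options {
      Signature = MD5
      IgnoreCase = yes
      # include the per-user directories (one path component below the File = line)
      RegexDir = "^C:/Documents and Settings/[^/]+$"
      # include each user's My Documents and My Pictures directories and everything below My Pictures
      WildDir = "C:/Documents and Settings/*/My Documents"
      WildDir = "C:/Documents and Settings/*/My Documents/My Pictures"
      WildDir = "C:/Documents and Settings/*/My Documents/My Pictures/*"
      WildFile = "C:/Documents and Settings/*/My Documents/My Pictures/*"
    }
    Options {
      Exclude = yes
      IgnoreCase = yes
      # everything else below C:/Documents and Settings is excluded
      Wild = "C:/Documents and Settings/*"
    }
  }
}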
Windows FileSets
If you are entering Windows file names, the directory path may be preceded by the drive and a colon (as in c:). However, the path separators must be specified in Unix convention (i.e. forward slash (/)). If you wish to include a quote in a file name, precede the quote with a backslash (\). For example you might use the following for a Windows machine to backup the “My Documents” directory:
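For instance, a sketch (the user name in the path is a placeholder):

FileSet {
  Name = "MyDocs"
  Include {
    Options {
      Signature = MD5
    }
    File = "c:/Documents and Settings/SomeUser/My Documents"
  }
}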
For exclude lists to work correctly on Windows, you must observe the following rules:
Filenames are case sensitive, so you must use the correct case.
To exclude a directory, you must not have a trailing slash on the directory name.
If you have spaces in your filename, you must enclose the entire name in double-quote characters ("). Trying to use a backslash before the space will not work.
If you are using the old Exclude syntax (noted below), you may not specify a drive letter in the exclude. The new syntax noted above should work fine including drive letters.
Thanks to Thiago Lima for summarizing the above items for us. If you are having difficulties getting includes or excludes to work, you might want to try using the estimate job=job-name listing command documented in the Console Commands section of this manual.
On Win32 systems, if you move a directory or file or rename a file into the set of files being backed up, and a Full backup has already been made, Bareos will not know there are new files to be saved during an Incremental or Differential backup (blame Microsoft, not us). To avoid this problem, please copy any new directory or files into the backup area. If you do not have enough disk to copy the directory or files, move them, but then initiate a Full backup.
Example Fileset for Windows
The following example demonstrates a Windows FileSet. It backs up all data from all fixed drives and only excludes some Windows temporary data.
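A sketch of such a FileSet; the exclude patterns shown are typical examples rather than an authoritative list:

FileSet {
  Name = "Windows All Drives"
  Enable VSS = yes
  Include {
    Options {
      Signature = MD5
      Drive Type = fixed
      IgnoreCase = yes
      WildFile = "[A-Z]:/pagefile.sys"
      WildDir = "[A-Z]:/RECYCLER"
      WildDir = "[A-Z]:/$RECYCLE.BIN"
      WildDir = "[A-Z]:/System Volume Information"
      Exclude = yes
    }
    File = /
  }
}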
File = /
includes all Windows drives. Using Drive Type = fixed
excludes drives like USB-Stick or CD-ROM Drive. Using WildDir = "[A-Z]:/RECYCLER"
excludes the backup of the directory RECYCLER
from all drives.
Testing Your FileSet
If you wish to get an idea of what your FileSet will really backup or if your exclusion rules will work correctly, you can test it by using the estimate command.
As an example, suppose you add the following test FileSet:
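A sketch of such a test FileSet; the regex is an assumption matching the .c example below:

FileSet {
  Name = Test
  Include {
    Options {
      Regex = ".*\\.c$"
    }
    File = /home/xxx/test
  }
}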
You could then add some test files to the directory /home/xxx/test and use the following command in the console:
estimate job=<any-job-name> listing client=<desired-client> fileset=Test
to give you a listing of all files that match. In the above example, it should be only files with names ending in .c.
Client Resource
The Client (or FileDaemon) resource defines the attributes of the Clients that are served by this Director; that is the machines that are to be backed up. You will need one Client resource definition for each machine to be backed up.
The configuration directives of the Client resource (with their data types, default values and remarks) are described in detail below.
- Address
- Required:
True
- Type:
Where the address is a host name, a fully qualified domain name, or a network address in dotted quad notation for a Bareos File server daemon. This directive is required.
- Auth Type
- Type:
- Default value:
None
Specifies the authentication type that must be supplied when connecting to a backup protocol that uses a specific authentication type.
- Auto Prune
- Type:
- Default value:
no
- Since Version:
deprecated
If set to yes, Bareos will automatically apply the
File Retention (Dir->Client)
period and theJob Retention (Dir->Client)
period for the client at the end of the job.
Pruning affects only information in the catalog and not data stored in the backup archives (on Volumes), but if pruning deletes all data referring to a certain volume, the volume is regarded as empty and will possibly be overwritten before the volume retention has expired.
- Catalog
- Type:
This specifies the name of the catalog resource to be used for this Client. If none is specified the first defined catalog is used.
- Connection From Client To Director
- Type:
- Default value:
no
- Since Version:
16.2.2
The Director will accept incoming network connection from this Client.
For details, see Client Initiated Connection.
- Connection From Director To Client
- Type:
- Default value:
yes
- Since Version:
16.2.2
Let the Director initiate the network connection to the Client.
- Enable kTLS
- Type:
- Default value:
no
If set to “yes”, Bareos will allow the SSL implementation to use Kernel TLS.
- FD Password
- Type:
This directive is an alias.
- FD Port
- Type:
- Default value:
9102
This directive is an alias.
Where the port is a port number at which the Bareos File Daemon can be contacted. The default is 9102. For NDMP backups set this to 10000.
- File Retention
- Type:
- Default value:
5184000
- Since Version:
deprecated
The File Retention directive defines the length of time that Bareos will keep File records in the Catalog database after the End time of the Job corresponding to the File records. When this time period expires and
Auto Prune (Dir->Client) = yes
, Bareos will prune (remove) File records that are older than the specified File Retention period. Note, this affects only records in the catalog database. It does not affect your archive backups.
File records may actually be retained for a shorter period than you specify on this directive if you specify either a shorter
Job Retention (Dir->Client)
or a shorterVolume Retention (Dir->Pool)
period. The shortest retention period of the three takes precedence.
The default is 60 days.
- Hard Quota
- Type:
- Default value:
0
The amount of data determined by the Hard Quota directive sets the hard limit of backup space that cannot be exceeded. This is the maximum amount this client can back up before any backup job will be aborted.
If the Hard Quota is exceeded, the running job is terminated.
- Heartbeat Interval
- Type:
- Default value:
0
Optional; if specified, sets a keepalive interval (heartbeat) on the sockets between the defined Bareos File Daemon and the Bareos Director.
If set, this value overrides
Heartbeat Interval (Dir->Director)
.See details in Heartbeat Interval - TCP Keepalive.
- Job Retention
- Type:
- Default value:
15552000
- Since Version:
deprecated
The Job Retention directive defines the length of time that Bareos will keep Job records in the Catalog database after the Job End time. When this time period expires and
Auto Prune (Dir->Client) = yes
Bareos will prune (remove) Job records that are older than the specified Job Retention period. As with the other retention periods, this affects only records in the catalog and not data in your archive backup.
If a Job record is selected for pruning, all associated File and JobMedia records will also be pruned regardless of the File Retention period set. As a consequence, you normally will set the File retention period to be less than the Job retention period. The Job retention period can actually be less than the value you specify here if you set the
Volume Retention (Dir->Pool)
directive to a smaller duration. This is because the Job retention period and the Volume retention period are independently applied, so the smaller of the two takes precedence.
The default is 180 days.
- Lan Address
- Type:
- Since Version:
16.2.6
Sets an additional address used for connections between the Client and the Storage Daemon inside a separate network.
This directive might be useful in network setups where the Bareos Director and Bareos Storage Daemon need different addresses to communicate with the Bareos File Daemon.
For details, see Using different IP Addresses for SD – FD Communication.
This directive corresponds to
Lan Address (Dir->Storage)
.
- Maximum Bandwidth Per Job
- Type:
The speed parameter specifies the maximum allowed bandwidth that a job may use when started for this Client.
- Maximum Concurrent Jobs
- Type:
- Default value:
1
This directive specifies the maximum number of Jobs with the current Client that can run concurrently. Note, this directive limits only Jobs for Clients with the same name as the resource in which it appears. Any other restrictions on the maximum concurrent jobs such as in the Director, Job or Storage resources will also apply in addition to any limit specified here.
- Name
- Required:
True
- Type:
The name of the resource.
The client name which will be used in the Job resource directive or in the console run command.
- NDMP Block Size
- Type:
- Default value:
64512
This directive sets the default NDMP blocksize for this client.
- NDMP Log Level
- Type:
- Default value:
4
This directive sets the loglevel for the NDMP protocol library.
- Passive
- Type:
- Default value:
no
- Since Version:
13.2.0
If enabled, the Storage Daemon will initiate the network connection to the Client. If disabled, the Client will initiate the network connection to the Storage Daemon.
The normal way of initializing the data channel (the channel where the backup data itself is transported) is done by the file daemon (client) that connects to the storage daemon.
By using the client passive mode, the initialization of the data channel is reversed, so that the storage daemon connects to the file daemon.
See chapter Passive Client.
- Password
- Required:
True
- Type:
This is the password to be used when establishing a connection with the File services, so the Client configuration file on the machine to be backed up must have the same password defined for this Director.
The password is plain text.
- Protocol
- Type:
- Default value:
Native
- Since Version:
13.2.0
The backup protocol to use to run the Job.
Currently the director understands the following protocols:
Native - The native Bareos protocol
NDMP - The NDMP protocol
- Quota Include Failed Jobs
- Type:
- Default value:
yes
If enabled, failed Jobs are also taken into consideration when calculating the amount of data a client has used.
- Soft Quota
- Type:
- Default value:
0
This is the amount after which a warning will be issued that a client is over its soft quota. A client can keep doing backups until it hits the hard quota or until the
Soft Quota Grace Period (Dir->Client)
has expired.
- Soft Quota Grace Period
- Type:
- Default value:
0
Time allowed for a client to be over its
Soft Quota (Dir->Client)
before it will be enforced.
When the amount of data backed up by the client exceeds the value specified by the Soft Quota directive, the next start of a backup job will start the soft quota grace time period. This is noted in the job log.
In the Job Overview, the value of Grace Expiry Date: will then change from Soft Quota was never exceeded to the date when the grace time expires, e.g. 11-Dec-2012 04:09:05.
During that period, it is possible to do backups even if the total amount of stored data exceeds the limit specified by soft quota.
While in this state, a corresponding message is written to the job log.
After the grace time expires, in the next backup job of the client, the value for Burst Quota will be set to the value that the client has stored at this point in time. Also, the job will be terminated. The job log shows what happened.
At this point, it is not possible to do any backup of the client. To be able to do more backups, the amount of stored data for this client has to fall under the burst quota value.
- Strict Quotas
- Type:
- Default value:
no
The directive Strict Quotas determines whether, after the Grace Time Period is over, to enforce the Burst Limit (Strict Quotas = No) or the Soft Limit (Strict Quotas = Yes).
The Job Log shows which of the two limits has been applied.
- TLS Allowed CN
- Type:
“Common Name”s (CNs) of the allowed peer certificates.
- TLS Cipher Suites
- Type:
Colon separated list of valid TLSv1.3 Ciphers; see openssl ciphers -s -tls1_3. Leftmost element has the highest priority. Currently only SHA256 ciphers are supported.
- TLS DH File
- Type:
Path to PEM encoded Diffie-Hellman parameter file. If this directive is specified, DH key exchange will be used for the ephemeral keying, allowing for forward secrecy of communications.
- TLS Enable
- Type:
- Default value:
yes
Enable TLS support.
Bareos can be configured to encrypt all its network traffic. See chapter TLS Configuration Directives to see, how the Bareos Director (and the other components) must be configured to use TLS.
- TLS Key
- Type:
Path of a PEM encoded private key. It must correspond to the specified “TLS Certificate”.
- TLS Require
- Type:
- Default value:
yes
If set to “no”, Bareos can fall back to use unencrypted connections.
- TLS Verify Peer
- Type:
- Default value:
no
If disabled, all certificates signed by a known CA will be accepted. If enabled, the CN of a certificate must match the Address or be in the “TLS Allowed CN” list.
- Username
- Type:
Specifies the username that must be supplied when authenticating. Only used for non-Native protocols at the moment.
The following is an example of a valid Client resource definition:
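A minimal sketch of a Client resource (name, address and password are placeholders):

Client {
  Name = client1-fd
  Address = client1.example.com
  Password = "secret"       # must match the Director password configured on the client
}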
The following is an example of a Quota Configuration in Client resource:
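A sketch of quota-related directives inside a Client resource (all values are illustrative):

Client {
  Name = client1-fd
  Address = client1.example.com
  Password = "secret"
  # Quota settings
  Soft Quota = 50 GB
  Soft Quota Grace Period = 2 days
  Strict Quotas = no
  Hard Quota = 150 GB
  Quota Include Failed Jobs = yes
}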
Storage Resource
The Storage resource defines which Storage daemons are available for use by the Director.
The configuration directives of the Storage resource (with their data types, default values and remarks) are described in detail below.
- Address
- Required:
True
- Type:
Where the address is a host name, a fully qualified domain name, or an IP address. Please note that the <address> as specified here will be transmitted to the File daemon, which will then use it to contact the Storage daemon. Hence, it is not a good idea to use localhost as the name; use a fully qualified machine name or an IP address instead. This directive is required.
- Allow Compression
- Type:
- Default value:
yes
This directive is optional, and if you specify No, it will cause backup jobs running on this storage resource to run without client File Daemon compression. This effectively overrides compression options in FileSets used by jobs which use this storage resource.
- Auth Type
- Type:
- Default value:
None
Specifies the authentication type that must be supplied when connecting to a backup protocol that uses a specific authentication type.
- Auto Changer
- Type:
- Default value:
no
When
Device (Dir->Storage)
refers to an Auto Changer (Autochanger (Sd->Device)
), this directive must be set to yes.
If you specify yes:
Volume management commands like label or add will request an Autochanger Slot number.
Bareos will prefer Volumes that are in an Auto Changer slot. If none of these volumes can be used, even after recycling, pruning, …, Bareos will search for any volume of the same
Media Type (Dir->Storage)
whether or not in the magazine.
Please consult the Autochanger & Tape drive Support chapter for details.
- Collect Statistics
- Type:
- Default value:
no
- Since Version:
deprecated
Collect statistics information. This information will be collected by the Director (see
Statistics Collect Interval (Dir->Director)
) and stored in the Catalog.
- Device
- Required:
True
- Type:
If Protocol (Dir->Job) is not NDMP_NATIVE (the default is Protocol (Dir->Job) = Native), this directive refers to one or multiple Name (Sd->Device) or a single Name (Sd->Autochanger).
If an Autochanger should be used, it has to refer to a configured Name (Sd->Autochanger). In this case, also set Auto Changer (Dir->Storage) = yes.
Otherwise it refers to one or more configured Name (Sd->Device), see Using Multiple Storage Devices.
This name is not the physical device name, but the logical device name as defined in the Bareos Storage Daemon resource.
If
Protocol (Dir->Job) = NDMP_NATIVE
, it refers to tape devices on the NDMP Tape Agent, see NDMP_NATIVE.
- Enable kTLS
- Type:
- Default value:
no
If set to “yes”, Bareos will allow the SSL implementation to use Kernel TLS.
- Heartbeat Interval
- Type:
- Default value:
0
Optional; if specified, sets a keepalive interval (heartbeat) on the sockets between the defined Storage Daemon and the Bareos Director.
If set, this value overrides
Heartbeat Interval (Dir->Director)
.See details in Heartbeat Interval - TCP Keepalive.
- Lan Address
- Type:
- Since Version:
16.2.6
Sets an additional address used for connections between the Client and the Storage Daemon inside a separate network.
This directive might be useful in network setups where the Bareos Director and Bareos File Daemon need different addresses to communicate with the Bareos Storage Daemon.
For details, see Using different IP Addresses for SD – FD Communication.
This directive corresponds to
Lan Address (Dir->Client)
.
- Maximum Concurrent Jobs
- Type:
- Default value:
1
This directive specifies the maximum number of Jobs with the current Storage resource that can run concurrently. Note, this directive limits only Jobs using this Storage daemon. Any other restrictions on the maximum concurrent jobs such as in the Director, Job or Client resources will also apply in addition to any limit specified here.
If you set the Storage daemon’s number of concurrent jobs greater than one, we recommend that you read Concurrent Jobs and/or turn data spooling on as documented in Data Spooling.
- Maximum Concurrent Read Jobs
- Type:
- Default value:
0
This directive specifies the maximum number of Jobs with the current Storage resource that can read concurrently.
- Media Type
- Required:
True
- Type:
This directive specifies the Media Type to be used to store the data. This is an arbitrary string of characters up to 127 maximum that you define. It can be anything you want. However, it is best to make it descriptive of the storage media (e.g. File, DAT, “HP DLT8000”, 8mm, …). In addition, it is essential that you make the Media Type specification unique for each storage media type. If you have two DDS-4 drives that have incompatible formats, or if you have a DDS-4 drive and a DDS-4 autochanger, you almost certainly should specify different Media Types. During a restore, assuming a DDS-4 Media Type is associated with the Job, Bareos can decide to use any Storage daemon that supports Media Type DDS-4 and on any drive that supports it.
If you are writing to disk Volumes, you must make doubly sure that each Device resource defined in the Storage daemon (and hence in the Director’s conf file) has a unique media type. Otherwise Bareos may assume that these Volumes can be mounted and read by any Storage daemon File device.
Currently Bareos permits only a single Media Type per Storage Device definition. Consequently, if you have a drive that supports more than one Media Type, you can give a unique string to Volumes with different intrinsic Media Type (Media Type = DDS-3-4 for DDS-3 and DDS-4 types), but then those volumes will only be mounted on drives indicated with the dual type (DDS-3-4).
If you want to tie Bareos to using a single Storage daemon or drive, you must specify a unique Media Type for that drive. This is an important point that should be carefully understood. Note, this applies equally to Disk Volumes. If you define more than one disk Device resource in your Storage daemon’s conf file, the Volumes on those two devices are in fact incompatible because one can not be mounted on the other device since they are found in different directories. For this reason, you probably should use two different Media Types for your two disk Devices (even though you might think of them as both being File types). You can find more on this subject in the Basic Volume Management chapter of this manual.
The MediaType specified in the Director’s Storage resource, must correspond to the Media Type specified in the Device resource of the Storage daemon configuration file. This directive is required, and it is used by the Director and the Storage daemon to ensure that a Volume automatically selected from the Pool corresponds to the physical device. If a Storage daemon handles multiple devices (e.g. will write to various file Volumes on different partitions), this directive allows you to specify exactly which device.
As mentioned above, the value specified in the Director’s Storage resource must agree with the value specified in the Device resource in the Storage daemon’s configuration file. It is also an additional check so that you don’t try to write data for a DLT onto an 8mm device.
- Name
- Required:
True
- Type:
The name of the resource.
The name of the storage resource. This name appears on the Storage directive specified in the Job resource and is required.
- NDMP Changer Device
- Type:
- Since Version:
16.2.4
Allows direct control of a Storage Daemon Auto Changer device by the Director. Only used in NDMP_NATIVE environments.
- Paired Storage
- Type:
For NDMP backups this points to the definition of the Native Storage that is accessed via the NDMP protocol. For now we only support NDMP backups and restores to access Native Storage Daemons via the NDMP protocol. In the future we might allow the use of Native NDMP storage which is not bound to a Bareos Storage Daemon.
- Password
- Required:
True
- Type:
This is the password to be used when establishing a connection with the Storage services. This same password also must appear in the Director resource of the Storage daemon’s configuration file. This directive is required.
The password is plain text.
- Port
- Type:
- Default value:
9103
Where port is the port to use to contact the storage daemon for information and to start jobs. This same port number must appear in the Storage resource of the Storage daemon’s configuration file.
- Protocol
- Type:
- Default value:
Native
- SD Password
- Type:
Alias for Password.
- TLS Allowed CN
- Type:
“Common Name”s (CNs) of the allowed peer certificates.
- TLS Cipher Suites
- Type:
Colon separated list of valid TLSv1.3 Ciphers; see openssl ciphers -s -tls1_3. Leftmost element has the highest priority. Currently only SHA256 ciphers are supported.
- TLS DH File
- Type:
Path to PEM encoded Diffie-Hellman parameter file. If this directive is specified, DH key exchange will be used for the ephemeral keying, allowing for forward secrecy of communications.
- TLS Enable
- Type:
- Default value:
yes
Enable TLS support.
Bareos can be configured to encrypt all its network traffic. For details, refer to chapter TLS Configuration Directives.
- TLS Key
- Type:
Path of a PEM encoded private key. It must correspond to the specified “TLS Certificate”.
- TLS Require
- Type:
- Default value:
yes
If set to “no”, Bareos can fall back to use unencrypted connections.
- TLS Verify Peer
- Type:
- Default value:
no
If disabled, all certificates signed by a known CA will be accepted. If enabled, the CN of a certificate must match the Address or be in the “TLS Allowed CN” list.
The following is an example of a valid Storage resource definition:
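A minimal sketch of a Storage resource (names, address and password are placeholders):

Storage {
  Name = File1
  Address = bareos-sd.example.com   # use a FQDN or IP address, not localhost
  Password = "storage-secret"       # must match the Director password in the Storage Daemon configuration
  Device = FileStorage              # logical device name defined in the Storage Daemon
  Media Type = File
}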
Pool Resource
The Pool resource defines the set of storage Volumes (tapes or files) to be used by Bareos to write the data. By configuring different Pools, you can determine which set of Volumes (media) receives the backup data. This permits, for example, to store all full backup data on one set of Volumes and all incremental backups on another set of Volumes. Alternatively, you could assign a different set of Volumes to each machine that you backup. This is most easily done by defining multiple Pools.
Another important aspect of a Pool is that it contains the default attributes (Maximum Jobs, Retention Period, Recycle flag, …) that will be given to a Volume when it is created. This avoids the need for you to answer a large number of questions when labeling a new Volume. Each of these attributes can later be changed on a Volume by Volume basis using the update command in the console program. Note that you must explicitly specify which Pool Bareos is to use with each Job. Bareos will not automatically search for the correct Pool.
To use a Pool, there are three distinct steps. First the Pool must be defined in the Director’s configuration. Then the Pool must be written to the Catalog database. This is done automatically by the Director each time that it starts. Finally, if you change the Pool definition in the Director’s configuration file and restart Bareos, the pool will be updated; alternatively, you can use the update pool console command to refresh the database image. It is this database image rather than the Director’s resource image that is used for the default Volume attributes. Note, for the pool to be automatically created or updated, it must be explicitly referenced by a Job resource.
If automatic labeling is not enabled (see Automatic Volume Labeling) the physical media must be manually labeled. The labeling can either be done with the label command in the console program or using the btape program. The preferred method is to use the label command in the console program. Generally, automatic labeling is enabled for Device Type (Sd->Device) = File
and disabled for Device Type (Sd->Device) = Tape
.
Finally, you must add Volume names (and their attributes) to the Pool. For Volumes to be used by Bareos they must be of the same Media Type (Sd->Device)
as the archive device specified for the job (i.e. if you are going to back up to a DLT device, the Pool must have DLT volumes defined since 8mm volumes cannot be mounted on a DLT drive). The Media Type (Sd->Device)
has particular importance if you are backing up to files.
When running a Job, you must explicitly specify which Pool to use. Bareos will then automatically select the next Volume to use from the Pool, but it will ensure that the Media Type (Sd->Device)
of any Volume selected from the Pool is identical to that required by the Storage resource you have specified for the Job.
If you use the label command in the console program to label the Volumes, they will automatically be added to the Pool, so this last step is not normally required.
It is also possible to add Volumes to the database without explicitly labeling the physical volume. This is done with the add console command.
As previously mentioned, each time Bareos starts, it scans all the Pools associated with each Catalog, and if the database record does not already exist, it will be created from the Pool Resource definition. If you change the Pool definition, you manually have to call update pool command in the console program to propagate the changes to existing volumes.
The Pool Resource defined in the Director’s configuration may contain the following directives:
The configuration directives of the Pool resource (with their data types, default values and remarks) are described in detail below.
- Action On Purge
- Type:
The directive
Action On Purge=Truncate
instructs Bareos to truncate the volume when it is purged with the purge volume action=truncate command. It is useful to prevent disk based volumes from consuming too much space.
- Auto Prune
- Type:
- Default value:
yes
If
Auto Prune=yes
, theVolume Retention (Dir->Pool)
period is automatically applied when a new Volume is needed and no appendable Volumes exist in the Pool. Volume pruning causes expired Jobs (older than theVolume Retention (Dir->Pool)
period) to be deleted from the Catalog and permits possible recycling of the Volume.
- Catalog
- Type:
This specifies the name of the catalog resource to be used for this Pool. When a catalog is defined in a Pool, it will override the definition in the client (and the Catalog definition in a Job since Version >= 13.4.0), i.e. this catalog setting takes precedence over any other definition.
- Catalog Files
- Type:
- Default value:
yes
This directive defines whether or not you want the names of the files that were saved to be put into the catalog. If disabled, the Catalog database will be significantly smaller. The disadvantage is that you will not be able to produce a Catalog listing of the files backed up for each Job (this is often called Browsing). Also, without the File entries in the catalog, you will not be able to use the Console restore command nor any other command that references File entries.
- Cleaning Prefix
- Type:
- Default value:
CLN
This directive defines a prefix string. If the beginning of a Volume name matches this prefix during labeling, the Volume will be given the VolStatus Cleaning and Bareos will never attempt to use this tape. This is primarily for use with autochangers that accept barcodes, where the convention is that barcodes beginning with CLN are treated as cleaning tapes.
The default value for this directive is consequently set to CLN, so that in most cases the cleaning tapes are automatically recognized without configuration. If you use another prefix for your cleaning tapes, you can set this directive accordingly.
- File Retention
- Type:
The File Retention directive defines the length of time that Bareos will keep File records in the Catalog database after the End time of the Job corresponding to the File records.
This directive takes precedence over Client directives of the same name. For example, you can decide to increase Retention times for Archive or OffSite Pool.
Note, this affects only records in the catalog database. It does not affect your archive backups.
For more information see Client documentation about
File Retention (Dir->Client)
- Job Retention
- Type:
The Job Retention directive defines the length of time that Bareos will keep Job records in the Catalog database after the Job End time. As with the other retention periods, this affects only records in the catalog and not data in your archive backup.
This directive takes precedence over Client directives of the same name. For example, you can decide to increase Retention times for Archive or OffSite Pool.
For more information see Client side documentation
Job Retention (Dir->Client)
- Label Format
- Type:
This directive specifies the format of the labels contained in this pool. The format directive is used as a sort of template to create new Volume names during automatic Volume labeling.
The format should be specified in double quotes (
"
), and consists of letters, numbers and the special characters hyphen (-
), underscore (_
), colon (:
), and period (.
), which are the legal characters for a Volume name. In addition, the format may contain a number of variable expansion characters which will be expanded by a complex algorithm allowing you to create Volume names of many different formats. In all cases, the expansion process must resolve to the set of characters noted above that are legal in Volume names. Generally, these variable expansion characters begin with a dollar sign (
$
) or a left bracket ([
). For more details on variable expansion, please see Variable Expansion on Volume Labels. If no variable expansion characters are found in the string, the Volume name will be formed from the format string with a unique, increasing number appended. If you do not remove volumes from the pool, this number should be the number of volumes plus one, but this is not guaranteed. The unique number will be edited as four digits with leading zeros. For example, with a Label Format = “File-”, the first volumes will be named File-0001, File-0002, …
In almost all cases, you should enclose the format specification (part after the equal sign) in double quotes (
"
).
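A minimal sketch, assuming automatic labeling is enabled on the corresponding disk device; the pool name is a placeholder:

    Pool {
      Name = FilePool
      Pool Type = Backup
      Label Format = "File-"   # produces File-0001, File-0002, ...
    }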
- Label Type
- Type:
- Since Version:
deprecated
This directive is implemented in the Director Pool resource and in the SD Device resource (
Label Type (Sd->Device)
). If it is specified in the SD Device resource, it will take precedence over the value passed from the Director to the SD.
- Maximum Block Size
- Type:
- Since Version:
14.2.0
The Maximum Block Size can be set here to define different block sizes per volume, or statically for all volumes at
Maximum Block Size (Sd->Device)
. Increasing this value may improve the throughput of writing to tapes.
Warning
However, make sure to read the Setting Block Sizes chapter carefully before applying any changes.
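A hedged sketch of a per-pool block size; the 1 MB value is only an illustration and must be validated against the Setting Block Sizes chapter and your tape hardware:

    Pool {
      Name = LTOPool
      Pool Type = Backup
      Maximum Block Size = 1048576   # 1 MB blocks for volumes of this pool
    }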
- Maximum Volume Bytes
- Type:
This directive specifies the maximum number of bytes that can be written to the Volume. If you specify zero (the default), there is no limit except the physical size of the Volume. Otherwise, when the number of bytes written to the Volume equals the specified size, the Volume will be marked Used. When the Volume is marked Used it can no longer be used for appending Jobs, much like the Full status, but it can be recycled if recycling is enabled, and thus the Volume can be re-used after recycling. This value is checked and the Used status set while the job is writing to the particular volume.
This directive is particularly useful for restricting the size of disk volumes, and will work correctly even in the case of multiple simultaneous jobs writing to the volume.
The value defined by this directive in the bareos-dir.conf file is the default value used when a Volume is created. Once the volume is created, changing the value in the bareos-dir.conf file will not change what is stored for the Volume. To change the value for an existing Volume you must use the update command in the Console.
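For disk-based pools, a sketch could look like the following; the 50 GB limit is a placeholder:

    Pool {
      Name = FilePool
      Pool Type = Backup
      Maximum Volume Bytes = 50G   # mark the volume Used after roughly 50 GB
    }

For Volumes that already exist, the stored limit has to be changed with the update volume command in the console.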
- Maximum Volume Files
- Type:
This directive specifies the maximum number of files that can be written to the Volume. If you specify zero (the default), there is no limit. Otherwise, when the number of files written to the Volume equals the specified value, the Volume will be marked Used. When the Volume is marked Used it can no longer be used for appending Jobs, much like the Full status, but it can be recycled if recycling is enabled and thus used again. This value is checked and the Used status is set only at the end of a job that writes to the particular volume.
The value defined by this directive in the bareos-dir.conf file is the default value used when a Volume is created. Once the volume is created, changing the value in the bareos-dir.conf file will not change what is stored for the Volume. To change the value for an existing Volume you must use the update command in the Console.
- Maximum Volume Jobs
- Type:
This directive specifies the maximum number of Jobs that can be written to the Volume. If you specify zero (the default), there is no limit. Otherwise, when the number of Jobs backed up to the Volume equals the specified value, the Volume will be marked Used. When the Volume is marked Used it can no longer be used for appending Jobs, much like the Full status, but it can be recycled if recycling is enabled, and thus used again. By setting MaximumVolumeJobs to one, you get the same effect as setting UseVolumeOnce = yes.
The value defined by this directive in the bareos-dir.conf file is the default value used when a Volume is created. Once the volume is created, changing the value in the bareos-dir.conf file will not change what is stored for the Volume. To change the value for an existing Volume you must use the update command in the Console.
If you are running multiple simultaneous jobs, this directive may not work correctly because when a drive is reserved for a job, this directive is not taken into account, so multiple jobs may try to start writing to the Volume. At some point, when the Media record is updated, multiple simultaneous jobs may fail since the Volume can no longer be written.
- Maximum Volumes
- Type:
This directive specifies the maximum number of volumes (tapes or files) contained in the pool. This directive is optional; if omitted or set to zero, any number of volumes will be permitted. In general, this directive is useful to ensure that the number of volumes does not grow unchecked when automatic labeling is used.
- Migration High Bytes
- Type:
This directive specifies the number of bytes in the Pool which will trigger a migration if
Selection Type (Dir->Job)
= PoolOccupancy has been specified. The fact that the Pool usage goes above this level does not automatically trigger a migration job. However, if a migration job runs and has the PoolOccupancy selection type set, the Migration High Bytes will be applied. Bareos does not currently restrict a pool to have only a single Media Type (Dir->Storage)
, so you must keep in mind that if you mix Media Types in a Pool, the results may not be what you want, as the Pool count of all bytes will be for all Media Types combined.
- Migration Low Bytes
- Type:
This directive specifies the number of bytes in the Pool which will stop a migration if
Selection Type (Dir->Job)
= PoolOccupancy has been specified and triggered by more than Migration High Bytes (Dir->Pool)
being in the pool. In other words, once a migration job is started with PoolOccupancy migration selection and it determines that there are more than Migration High Bytes, the migration job will continue to run jobs until the number of bytes in the Pool drop to or below Migration Low Bytes.
- Migration Time
- Type:
If
Selection Type (Dir->Job)
= PoolTime, the time specified here will be used. If the previous Backup Job or Jobs selected have been in the Pool longer than the specified time, then they will be migrated.
- Minimum Block Size
- Type:
The Minimum Block Size can be set here to define different block sizes per volume, or statically for all volumes at
Minimum Block Size (Sd->Device)
. For details, see chapter Setting Block Sizes.
- Next Pool
- Type:
This directive specifies the pool to which a Migration or Copy Job and a Virtual Backup Job will write their data. This directive is required to define the Pool into which the data will be migrated. Without this directive, the migration job will terminate in error.
- Pool Type
- Type:
- Default value:
Backup
This directive defines the pool type, which corresponds to the type of Job being run. It is required and may be one of the following:
* Backup
* Archive
* Cloned
* Migration
* Copy
* Save
Note, only Backup is currently implemented.
- Purge Oldest Volume
- Type:
- Default value:
no
This directive instructs the Director to search for the oldest used Volume in the Pool when another Volume is requested by the Storage daemon and none are available. The catalog records of all Files and Jobs written to this Volume are then purged irrespective of their retention periods. The Volume is then recycled and will be used as the next Volume to be written. This directive overrides any Job, File, or Volume retention periods that you may have specified.
This directive can be useful if you have a fixed number of Volumes in the Pool and want to cycle through them, reusing the oldest one when all Volumes are full, without worrying about setting proper retention periods. However, by using this option you risk losing valuable data.
In most cases, you should use
Recycle Oldest Volume (Dir->Pool)
instead.
Warning
Be aware that Purge Oldest Volume disregards all retention periods. If you have only a single Volume defined and you turn this variable on, that Volume will always be immediately overwritten when it fills! So at a minimum, ensure that you have a decent number of Volumes in your Pool before running any jobs. If you want retention periods to apply, do not use this directive. We highly recommend against using this directive, because some day Bareos is sure to purge a Volume that contains current data.
- Recycle
- Type:
- Default value:
yes
This directive specifies whether or not Purged Volumes may be recycled. If it is set to yes and Bareos needs a volume but finds none that are appendable, it will search for and recycle (reuse) Purged Volumes (i.e. volumes with all the Jobs and Files expired and thus deleted from the Catalog). If the Volume is recycled, all previous data written to that Volume will be overwritten. If Recycle is set to no, the Volume will not be recycled, and hence, the data will remain valid. If you want to reuse (re-write) the Volume, and the recycle flag is no (0 in the catalog), you may manually set the recycle flag (update command) for a Volume to be reused.
Please note that the value defined by this directive in the configuration file is the default value used when a Volume is created. Once the volume is created, changing the value in the configuration file will not change what is stored for the Volume. To change the value for an existing Volume you must use the update volume command.
When all Job and File records have been pruned or purged from the catalog for a particular Volume, if that Volume is marked as Append, Full, Used, or Error, it will then be marked as Purged. Only Volumes marked as Purged will be considered to be converted to the Recycled state if the Recycle directive is set to yes.
- Recycle Current Volume
- Type:
- Default value:
no
If Bareos needs a new Volume, this directive instructs Bareos to Prune the volume respecting the Job and File retention periods. If all Jobs are pruned (i.e. the volume is Purged), then the Volume is recycled and will be used as the next Volume to be written. This directive respects any Job, File, or Volume retention periods that you may have specified.
This directive can be useful if you have a fixed number of Volumes in the Pool, want to cycle through them, and have specified retention periods that prune Volumes before you have cycled through all the Volumes in the Pool.
However, if you use this directive and have only one Volume in the Pool, you will immediately recycle your Volume if you fill it and Bareos needs another one. Thus your backup will be totally invalid. Please use this directive with care.
- Recycle Oldest Volume
- Type:
- Default value:
no
This directive instructs the Director to search for the oldest used Volume in the Pool when another Volume is requested by the Storage daemon and none are available. The catalog is then pruned respecting the retention periods of all Files and Jobs written to this Volume. If all Jobs are pruned (i.e. the volume is Purged), then the Volume is recycled and will be used as the next Volume to be written. This directive respects any Job, File, or Volume retention periods that you may have specified.
This directive can be useful if you have a fixed number of Volumes in the Pool and you want to cycle through them and you have specified the correct retention periods.
However, if you use this directive and have only one Volume in the Pool, you will immediately recycle your Volume if you fill it and Bareos needs another one. Thus your backup will be totally invalid. Please use this directive with care.
- Recycle Pool
- Type:
This directive defines to which pool the Volume will be placed (moved) when it is recycled. Without this directive, a Volume will remain in the same pool when it is recycled. With this directive, it can be moved automatically to any existing pool during a recycle. This directive is probably most useful when defined in the Scratch pool, so that volumes will be recycled back into the Scratch pool. For more on this, see the Scratch Pool section of this manual.
Although this directive is called RecyclePool, the Volume in question is actually moved from its current pool to the one you specify on this directive when Bareos prunes the Volume and discovers that there are no records left in the catalog and hence marks it as Purged.
- Scratch Pool
- Type:
This directive permits you to specify a dedicated Scratch pool for the current pool. This pool will replace the special pool named Scratch for volume selection. For more information about Scratch, see the Scratch Pool section of this manual. This is useful when multiple storages share the same media type or when you want to dedicate volumes to a particular set of pools.
- Storage
- Type:
The Storage directive defines the name of the storage service where you want to back up the FileSet data. For additional details, see the Storage Resource of this manual. The Storage resource may also be specified in the Job resource, but the value, if any, in the Pool resource overrides any value in the Job. A Storage resource definition is not required in either the Job resource or the Pool resource, but it must be specified in one or the other; if not, a configuration error will result. We highly recommend that you define the Storage resource to be used in the Pool rather than elsewhere (job, schedule run, …). Be aware that you can theoretically give a list of storages here, but only the first item from the list is actually used for backup and restore jobs.
- Use Catalog
- Type:
- Default value:
yes
Store information into the Catalog. In all practical use cases, leave this value at its default.
- Volume Retention
- Type:
- Default value:
31536000
The Volume Retention directive defines the length of time that Bareos will keep records associated with the Volume in the Catalog database after the End time of each Job written to the Volume. When this time period expires, and if AutoPrune is set to yes, Bareos may prune (remove) Job records that are older than the specified Volume Retention period if it is necessary to free up a Volume. Recycling will not occur until it is absolutely necessary to free up a volume (i.e. no other writable volume exists). All File records associated with pruned Jobs are also pruned. The time may be specified as seconds, minutes, hours, days, weeks, months, quarters, or years. The Volume Retention is applied independently of the Job Retention and the File Retention periods defined in the Client resource. This means that all the retention periods are applied in turn and that the shortest period is the one that effectively takes precedence. Note that when the Volume Retention period has been reached, and it is necessary to obtain a new volume, Bareos will prune both the Job and the File records. This pruning could also occur during a status dir command because it uses similar algorithms for finding the next available Volume.
It is important to know that when the Volume Retention period expires, Bareos does not automatically recycle a Volume. It attempts to keep the Volume data intact as long as possible before overwriting the Volume.
By defining multiple Pools with different Volume Retention periods, you may effectively have a set of tapes that is recycled weekly, another Pool of tapes that is recycled monthly, and so on. However, keep in mind that if your Volume Retention period is too short, it may prune the last valid Full backup, and hence until the next Full backup is done, you will not have a complete backup of your system; in addition, the next Incremental or Differential backup will be promoted to a Full backup. As a consequence, the minimum Volume Retention period should be at least twice the interval of your Full backups. This means that if you do a Full backup once a month, the minimum Volume Retention period should be two months.
The default Volume retention period is 365 days, and either the default or the value defined by this directive in the bareos-dir.conf file is the default value used when a Volume is created. Once the volume is created, changing the value in the
bareos-dir.conf
file will not change what is stored for the Volume. To change the value for an existing Volume you must use the update command in the Console.
- Volume Use Duration
- Type:
The Volume Use Duration directive defines the time period that the Volume can be written beginning from the time of first data write to the Volume. If the time-period specified is zero (the default), the Volume can be written indefinitely. Otherwise, the next time a job runs that wants to access this Volume, and the time period from the first write to the volume (the first Job written) exceeds the time-period-specification, the Volume will be marked Used, which means that no more Jobs can be appended to the Volume, but it may be recycled if recycling is enabled. Once the Volume is recycled, it will be available for use again.
You might use this directive, for example, if you have a Volume used for Incremental backups, and Volumes used for Weekly Full backups. Once the Full backup is done, you will want to use a different Incremental Volume. This can be accomplished by setting the Volume Use Duration for the Incremental Volume to six days. I.e. it will be used for the 6 days following a Full save, then a different Incremental volume will be used. Be careful about setting the duration to short periods such as 23 hours, or you might experience problems of Bareos waiting for a tape over the weekend only to complete the backups Monday morning when an operator mounts a new tape.
Please note that the value defined by this directive in the bareos-dir.conf file is the default value used when a Volume is created. Once the volume is created, changing the value in the bareos-dir.conf file will not change what is stored for the Volume. To change the value for an existing Volume you must use the update volume command in the Console.
The following is an example of a valid Pool resource definition:
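The sketch below uses placeholder names and limits for a disk-based backup pool; adjust them to your environment:

    Pool {
      Name = Full
      Pool Type = Backup
      Recycle = yes                  # automatically reuse purged volumes
      Auto Prune = yes               # apply Volume Retention when a new volume is needed
      Volume Retention = 365 days
      Maximum Volume Bytes = 50G
      Maximum Volumes = 100
      Label Format = "Full-"
    }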
Scratch Pool
In general, you can give your Pools any name you wish, but there is one important restriction: the Pool named Scratch, if it exists, behaves like a scratch pool of Volumes. When Bareos needs a new Volume for writing and cannot find one, it will look in the Scratch pool, and if it finds an available Volume, it will move it out of the Scratch pool into the Pool currently being used by the job.
Catalog Resource
The Catalog Resource defines what catalog to use for the current job.
- Address
- Type:
This directive is an alias.
Alias for
DB Address (Dir->Catalog)
.
- DB Address
- Type:
This is the host address of the database server. Normally, you would specify this instead of
DB Socket (Dir->Catalog)
if the database server is on another machine. In that case, you will also specify DB Port (Dir->Catalog)
.
- DB Password
- Type:
This specifies the password to use when logging in to the database.
- DB Port
- Type:
This defines the port to be used in conjunction with
DB Address (Dir->Catalog)
to access the database if it is on another machine.
- DB Socket
- Type:
This is the name of a socket to use on the local host to connect to the database. Normally, if neither
DB Socket (Dir->Catalog)
nor DB Address (Dir->Catalog)
is specified, the default socket will be used.
- Disable Batch Insert
- Type:
- Default value:
no
This directive allows you to override at runtime whether batch insert should be enabled or disabled. Normally this is determined by querying the database library whether it is thread-safe. If you think that disabling batch insert will make your backup run faster, you may disable it by setting this directive to yes.
- Exit On Fatal
- Type:
- Default value:
no
- Since Version:
15.1.0
Make any fatal error in the connection to the database exit the program
- Idle Timeout
- Type:
- Default value:
30
This directive is used by the experimental database pooling functionality. Only use this for non-production sites. This sets the idle time after which a database pool should be shrunk.
- Inc Connections
- Type:
- Default value:
1
This directive is used by the experimental database pooling functionality. Only use this for non-production sites. This sets the number of connections to add to a database pool when not enough connections are available in the pool anymore.
- Max Connections
- Type:
- Default value:
5
This directive is used by the experimental database pooling functionality. Only use this for non-production sites. This sets the maximum number of connections to a database to keep in this database pool.
- Min Connections
- Type:
- Default value:
1
This directive is used by the experimental database pooling functionality. Only use this for non-production sites. This sets the minimum number of connections to a database to keep in this database pool.
- Name
- Required:
True
- Type:
The name of the resource.
The name of the Catalog. No necessary relation to the database server name. This name will be specified in the Client resource directive indicating that all catalog data for that Client is maintained in this Catalog.
- Password
- Type:
This directive is an alias.
Alias for
DB Password (Dir->Catalog)
.
- Reconnect
- Type:
- Default value:
yes
- Since Version:
15.1.0
Try to reconnect a database connection when it is dropped
- User
- Type:
This directive is an alias.
Alias for
DB User (Dir->Catalog)
.
- Validate Timeout
- Type:
- Default value:
120
This directive is used by the experimental database pooling functionality. Only use this for non-production sites. This sets the validation timeout after which the database connection is polled to see if it is still alive.
The following is an example of a valid Catalog resource definition:
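A sketch for a catalog in a local PostgreSQL database; the database name, user and password are placeholders:

    Catalog {
      Name = MyCatalog
      DB Name = bareos
      DB User = bareos
      DB Password = "secret"
    }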
or for a Catalog on another machine:
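Again a sketch with placeholder values, this time pointing the Director to a database server on a different host via DB Address (Dir->Catalog) and DB Port (Dir->Catalog):

    Catalog {
      Name = MyCatalog
      DB Name = bareos
      DB User = bareos
      DB Password = "secret"
      DB Address = db.example.com
      DB Port = 5432
    }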
Messages Resource
For the details of the Messages Resource, please see the Messages Configuration of this manual.
Console Resource
There are three different kinds of consoles, which the administrator or user can use to interact with the Director. These three kinds of consoles comprise three different security levels.
- Default Console
the first console type is an “anonymous” or “default” console, which has full privileges. There is no console resource necessary for this type since the password is specified in the Director’s resource and consequently such consoles do not have a name as defined on a Name directive. Typically you would use it only for administrators.
- Named Console
the second type of console is a “named” console (also called “Restricted Console”) defined within a Console resource in both the Director’s configuration file and in the Console’s configuration file. Both the names and the passwords in these two entries must match, much as is the case for Client programs.
This second type of console begins with absolutely no privileges except those explicitly specified in the Director’s Console resource. Thus you can have multiple Consoles with different names and passwords, sort of like multiple users, each with different privileges. As a default, these consoles can do absolutely nothing – no commands whatsoever. You give them privileges or rather access to commands and resources by specifying access control lists in the Director’s Console resource. The ACLs are specified by a directive followed by a list of access names. Examples of this are shown below.
The third type of console is similar to the above-mentioned one in that it requires a Console resource definition in both the Director and the Console. In addition, if the console name, provided on the
Name (Dir->Console)
directive, is the same as a Client name, that console is permitted to use the SetIP command to change the Address directive in the Director’s client resource to the IP address of the Console. This permits portables or other machines using DHCP (non-fixed IP addresses) to “notify” the Director of their current IP address.
The Console resource is optional and need not be specified. The following directives are permitted within these resources:
- Catalog ACL
- Type:
Lists the Catalog resources this resource has access to. The special keyword all allows access to all Catalog resources.
This directive is used to specify a list of Catalog resource names that can be accessed by the console.
- Client ACL
- Type:
Lists the Client resources this resource has access to. The special keyword all allows access to all Client resources.
This directive is used to specify a list of Client resource names that can be accessed by the console.
- Command ACL
- Type:
Lists the commands this resource has access to. The special keyword all allows using all commands.
This directive is used to specify a list of console commands that can be executed by the console. See Command ACL example.
- Enable kTLS
- Type:
- Default value:
no
If set to “yes”, Bareos will allow the SSL implementation to use Kernel TLS.
- File Set ACL
- Type:
Lists the File Set resources this resource has access to. The special keyword all allows access to all File Set resources.
This directive is used to specify a list of FileSet resource names that can be accessed by the console.
- Job ACL
- Type:
Lists the Job resources this resource has access to. The special keyword all allows access to all Job resources.
This directive is used to specify a list of Job resource names that can be accessed by the console. Without this directive, the console cannot access any of the Director’s Job resources. Multiple Job resource names may be specified by separating them with commas, and/or by specifying multiple Job ACL directives. For example, the directive may be specified as:
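The job names in this sketch are placeholders:

    Job ACL = kernsave, "Backup client 1", "Backup client 2"
    Job ACL = "RestoreFiles"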
With the above specification, the console can access the Director’s resources for the jobs named on the Job ACL directives, but for no others.
- Name
- Required:
True
- Type:
The name of the console. This name must match the name specified at the Console client.
- Password
- Required:
True
- Type:
Specifies the password that must be supplied for a named Bareos Console to be authorized.
- Plugin Options ACL
- Type:
Specifies the allowed plugin options. An empty string allows all Plugin Options.
Use this directive to specify the list of allowed Plugin Options.
- Pool ACL
- Type:
Lists the Pool resources this resource has access to. The special keyword all allows access to all Pool resources.
This directive is used to specify a list of Pool resource names that can be accessed by the console.
- Profile
- Type:
- Since Version:
14.2.3
Profiles can be assigned to a Console. ACLs are checked until either a deny or an allow ACL is found. First the console ACL is checked, then any profile the console is linked to.
One or more Profile names can be assigned to a Console. If an ACL is not defined in the Console, the profiles of the Console will be checked in the order specified here. The first ACL found will be used. See Profile Resource.
- Schedule ACL
- Type:
Lists the Schedule resources this resource has access to. The special keyword all allows access to all Schedule resources.
This directive is used to specify a list of Schedule resource names that can be accessed by the console.
- Storage ACL
- Type:
Lists the Storage resources this resource has access to. The special keyword all allows access to all Storage resources.
This directive is used to specify a list of Storage resource names that can be accessed by the console.
- TLS Allowed CN
- Type:
“Common Name”s (CNs) of the allowed peer certificates.
- TLS Cipher Suites
- Type:
Colon separated list of valid TLSv1.3 Ciphers; see openssl ciphers -s -tls1_3. Leftmost element has the highest priority. Currently only SHA256 ciphers are supported.
- TLS DH File
- Type:
Path to PEM encoded Diffie-Hellman parameter file. If this directive is specified, DH key exchange will be used for the ephemeral keying, allowing for forward secrecy of communications.
- TLS Enable
- Type:
- Default value:
yes
Enable TLS support.
Bareos can be configured to encrypt all its network traffic. See chapter TLS Configuration Directives to see, how the Bareos Director (and the other components) must be configured to use TLS.
- TLS Key
- Type:
Path of a PEM encoded private key. It must correspond to the specified “TLS Certificate”.
- TLS Require
- Type:
- Default value:
yes
If set to “no”, Bareos can fall back to use unencrypted connections.
- TLS Verify Peer
- Type:
- Default value:
no
If disabled, all certificates signed by a known CA will be accepted. If enabled, the CN of a certificate must match the Address or be in the “TLS Allowed CN” list.
- Use Pam Authentication
- Type:
- Default value:
no
- Since Version:
18.2.4
If set to yes, PAM will be used to authenticate the user on this console. Otherwise, only the credentials of this console resource are used for authentication.
- Where ACL
- Type:
Specifies the base directories where files may be restored. An empty string allows restores to all directories.
This directive permits you to specify where a restricted console can restore files. If this directive is not specified, only the default restore location is permitted (normally
/tmp/bareos-restores
). If *all* is specified, any path the user enters will be accepted. Any other value specified (there may be multiple Where ACL directives) will restrict the user to that path. For example, on a Unix system, if you specify “/”, the file will be restored to its original location.
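A short sketch; the path is a placeholder, and the second form would allow any restore location:

    Where ACL = "/data/restores"

or

    Where ACL = all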
The example at Using Named Consoles shows how to use a console resource for a connection from a client like bconsole.
User Resource
Each user who wants to log in using PAM needs a dedicated User Resource in the Bareos Director configuration. The main purpose is to configure the ACLs described below; they are the same as in the Console Resource and the Profile Resource.
If a user is authenticated with PAM but is not authorized by a user resource, the login will be denied by the Bareos Director.
Refer to chapter Pluggable Authentication Modules (PAM) for details how to configure PAM.
The following directives can be configured in the User Resource:
- Catalog ACL
- Type:
Lists the Catalog resources this resource has access to. The special keyword all allows access to all Catalog resources.
- Client ACL
- Type:
Lists the Client resources this resource has access to. The special keyword all allows access to all Client resources.
- Command ACL
- Type:
Lists the commands this resource has access to. The special keyword all allows using all commands.
- File Set ACL
- Type:
Lists the File Set resources this resource has access to. The special keyword all allows access to all File Set resources.
- Job ACL
- Type:
Lists the Job resources this resource has access to. The special keyword all allows access to all Job resources.
- Plugin Options ACL
- Type:
Specifies the allowed plugin options. An empty string allows all Plugin Options.
- Pool ACL
- Type:
Lists the Pool resources this resource has access to. The special keyword all allows access to all Pool resources.
- Profile
- Type:
- Since Version:
14.2.3
Profiles can be assigned to a Console. ACLs are checked until either a deny or an allow ACL is found. First the console ACL is checked, then any profile the console is linked to.
- Schedule ACL
- Type:
Lists the Schedule resources this resource has access to. The special keyword all allows access to all Schedule resources.
- Storage ACL
- Type:
Lists the Storage resources this resource has access to. The special keyword all allows access to all Storage resources.
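A hedged sketch of a User resource for a PAM-authenticated login; the user name, profile and ACL values are placeholders:

    User {
      Name = "jdoe"
      Profile = "operator"                   # ACLs not set here are taken from the profile
      Command ACL = status, messages, list   # overrides the profile's Command ACL
    }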
Profile Resource
The Profile Resource defines a set of ACLs. A Console Resource can be tied to one or more profiles (Profile (Dir->Console)
), making it easier to use a common set of ACLs.
- Catalog ACL
- Type:
Lists the Catalog resources this resource has access to. The special keyword all allows access to all Catalog resources.
- Client ACL
- Type:
Lists the Client resources this resource has access to. The special keyword all allows access to all Client resources.
- Command ACL
- Type:
Lists the commands this resource has access to. The special keyword all allows using all commands.
- File Set ACL
- Type:
Lists the File Set resources this resource has access to. The special keyword all allows access to all File Set resources.
- Job ACL
- Type:
Lists the Job resources this resource has access to. The special keyword all allows access to all Job resources.
- Plugin Options ACL
- Type:
Specifies the allowed plugin options. An empty string allows all Plugin Options.
- Pool ACL
- Type:
Lists the Pool resources this resource has access to. The special keyword all allows access to all Pool resources.
- Schedule ACL
- Type:
Lists the Schedule resources this resource has access to. The special keyword all allows access to all Schedule resources.
- Storage ACL
- Type:
Lists the Storage resources this resource has access to. The special keyword all allows access to all Storage resources.
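A hedged sketch of a Profile that bundles a set of ACLs; the name and the granted resources are placeholders:

    Profile {
      Name = "operator"
      Command ACL = status, messages, list, run, restore
      Job ACL = all
      Client ACL = all
      Pool ACL = all
      Storage ACL = all
      File Set ACL = all
      Catalog ACL = all
      Schedule ACL = all
    }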
Counter Resource
The Counter Resource defines a counter variable that can be accessed by variable expansion used for creating Volume labels with the Label Format (Dir->Pool)
directive.
- Catalog
- Type:
If this directive is specified, the counter and its values will be saved in the specified catalog. If this directive is not present, the counter will be redefined each time that Bareos is started.
- Maximum
- Type:
- Default value:
2147483647
This is the maximum value that the counter can have. If not specified or set to zero, the counter can have a maximum value of 2,147,483,647 (2^31 - 1). When the counter is incremented past this value, it is reset to the Minimum.
- Minimum
- Type:
- Default value:
0
This specifies the minimum value that the counter can have. It also becomes the default. If not supplied, zero is assumed.
- Name
- Required:
True
- Type:
The name of the resource.
The name of the Counter. This is the name you will use in the variable expansion to reference the counter value.
- Wrap Counter
- Type:
If this value is specified, when the counter is incremented past the maximum and thus reset to the minimum, the counter specified on the
Wrap Counter (Dir->Counter)
is incremented. (This is currently not implemented).
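A sketch of a Counter that is persisted in the catalog so its value survives Director restarts; the name, limits and catalog are placeholders:

    Counter {
      Name = "NightlyTapes"
      Minimum = 1
      Maximum = 99
      Catalog = MyCatalog
    }

The counter can then be referenced through variable expansion in a Label Format (Dir->Pool) string; see Variable Expansion on Volume Labels for the exact syntax.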