NAME¶
backuppc - BackupPC manual
BackupPC Introduction¶
This documentation describes BackupPC version 3.2.1, released on 24 Apr 2011.
Overview¶
BackupPC is a high-performance, enterprise-grade system for backing up Unix,
Linux, WinXX, and MacOSX PCs, desktops and laptops to a server's disk.
BackupPC is highly configurable and easy to install and maintain.
Given the ever decreasing cost of disks and raid systems, it is now practical
and cost effective to backup a large number of machines onto a server's local
disk or network storage. For some sites this might be the complete backup
solution. For other sites additional permanent archives could be created by
periodically backing up the server to tape.
Features include:
- A clever pooling scheme minimizes disk storage and disk
I/O. Identical files across multiple backups of the same or different PC
are stored only once (using hard links), resulting in substantial savings
in disk storage and disk writes.
- Optional compression provides additional reductions in
storage (around 40%). The CPU impact of compression is low since only new
files (those not already in the pool) need to be compressed.
- A powerful http/cgi user interface allows administrators to
view the current status, edit configuration, add/delete hosts, view log
files, and allows users to initiate and cancel backups and browse and
restore files from backups.
- The http/cgi user interface has internationalization (i18n)
support, currently providing English, French, German, Spanish, Italian,
Dutch, Polish, Portuguese-Brazilian and Chinese.
- No client-side software is needed. On WinXX the standard
smb protocol is used to extract backup data. On linux, unix or MacOSX
clients, rsync, tar (over ssh/rsh/nfs) or ftp is used to extract backup
data. Alternatively, rsync can also be used on WinXX (using cygwin), and
Samba could be installed on the linux or unix client to provide smb
shares.
- Flexible restore options. Single files can be downloaded
from any backup directly from the CGI interface. Zip or Tar archives for
selected files or directories from any backup can also be downloaded from
the CGI interface. Finally, direct restore to the client machine (using
smb or tar) for selected files or directories is also supported from the
CGI interface.
- BackupPC supports mobile environments where laptops are
only intermittently connected to the network and have dynamic IP addresses
(DHCP). Configuration settings allow machines connected via slower WAN
connections (eg: dial up, DSL, cable) to not be backed up, even if they
use the same fixed or dynamic IP address as when they are connected
directly to the LAN.
- Flexible configuration parameters allow multiple backups to
be performed in parallel, specification of which shares to backup, which
directories to backup or not backup, various schedules for full and
incremental backups, schedules for email reminders to users and so on.
Configuration parameters can be set system-wide or also on a per-PC
basis.
- Users are sent periodic email reminders if their PC has not
recently been backed up. Email content, timing and policies are
configurable.
- BackupPC is Open Source software hosted by
SourceForge.
Backup basics¶
- Full Backup
- A full backup is a complete backup of a share. BackupPC can
be configured to do a full backup at a regular interval (typically
weekly). BackupPC can be configured to keep a certain number of full
backups. Exponential expiry is also supported, allowing full backups with
various vintages to be kept (for example, a settable number of most recent
weekly fulls, plus a settable number of older fulls that are 2, 4, 8, or
16 weeks apart).
- Incremental Backup
- An incremental backup is a backup of files that have
changed since the last successful full or incremental backup. Starting in
BackupPC 3.0 multi-level incrementals are supported. A full backup has
level 0. A new incremental of level N will backup all files that have
changed since the most recent backup of a lower level. $Conf{IncrLevels}
is used to specify the level of each successive incremental. The default
value is all level 1, which makes the behavior the same as earlier
versions of BackupPC: each incremental will back up all the files that
changed since the last full (level 0).
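For example, a main config.pl entry along these lines (the particular levels
are just an illustration) makes the first incremental after each full run at
level 1, the next at level 2, and the next at level 3:
# Illustrative only: after each level-0 full, do incrementals at levels
# 1, 2 and 3; each backs up files changed since the most recent backup
# of a lower level.
$Conf{IncrLevels} = [1, 2, 3];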
For SMB and tar, BackupPC uses the modification time (mtime) to determine
which files have changed since the last lower-level backup. That means SMB
and tar incrementals are not able to detect deleted files, renamed files
or new files whose modification time is prior to the last lower-level
backup.
Rsync is more clever: any files whose attributes have changed (ie: uid, gid,
mtime, modes, size) since the last full are backed up. Deleted, new files
and renamed files are detected by Rsync incrementals.
BackupPC can also be configured to keep a certain number of incremental
backups, and to keep a smaller number of very old incremental backups. If
multi-level incrementals are specified then it is likely that more
incrementals will need to be kept since lower-level incrementals (and the
full backup) are needed to reconstruct a higher-level incremental.
BackupPC "fills-in" incremental backups when browsing or
restoring, based on the levels of each backup, giving every backup a
"full" appearance. This makes browsing and restoring backups
much easier: you can restore from any one backup independent of whether it
was an incremental or full.
- Partial Backup
- When a full backup fails or is canceled, and some files
have already been backed up, BackupPC keeps a partial backup containing
just the files that were backed up successfully. The partial backup is
removed when the next successful backup completes, or if another full
backup fails resulting in a newer partial backup. A failed full backup
that has not backed up any files, or any failed incremental backup, is
removed; no partial backup is saved in these cases.
The partial backup may be browsed or used to restore files just like a
successful full or incremental backup.
With the rsync transfer method the partial backup is used to resume the next
full backup, avoiding the need to retransfer the file data already in the
partial backup.
- Identical Files
- BackupPC pools identical files using hardlinks. By
"identical files" we mean files with identical contents, not
necessary the same permissions, ownership or modification time. Two files
might have different permissions, ownership, or modification time but will
still be pooled whenever the contents are identical. This is possible
since BackupPC stores the file meta-data (permissions, ownership, and
modification time) separately from the file contents.
- Backup Policy
- Based on your site's requirements you need to decide what
your backup policy is. BackupPC is not designed to provide exact
re-imaging of failed disks. See Limitations for more information. However,
the addition of tar transport for linux/unix clients, plus full support
for special file types and unix attributes in v1.4.0 likely means an exact
image of a linux/unix file system can be made.
BackupPC saves backups onto disk. Because of pooling you can relatively
economically keep several weeks of old backups.
At some sites the disk-based backup will be adequate, without a secondary
tape backup. This system is robust to any single failure: if a client disk
fails or loses files, the BackupPC server can be used to restore files. If
the server disk fails, BackupPC can be restarted on a fresh file system,
and create new backups from the clients. The chance of the server disk
failing can be made very small by spending more money on increasingly
better RAID systems. However, there is still the risk of catastrophic
events like fires or earthquakes that can destroy both the BackupPC server
and the clients it is backing up if they are physically nearby.
Some sites might choose to do periodic backups to tape or cd/dvd. This
backup can be done perhaps weekly using the archive function of BackupPC.
Other users have reported success with removable disks to rotate the
BackupPC data drives, or using rsync to mirror the BackupPC data pool
offsite.
Resources¶
- BackupPC home page
- The BackupPC Open Source project is hosted on SourceForge.
The home page can be found at:
http://backuppc.sourceforge.net
This page has links to the current documentation, the SourceForge project
page and general information.
- SourceForge project
- The SourceForge project page is at:
http://sourceforge.net/projects/backuppc
This page has links to the current releases of BackupPC.
- BackupPC Wiki
- BackupPC has a Wiki at
<http://backuppc.wiki.sourceforge.net>. Everyone is encouraged to
contribute to the Wiki. Anyone with a SourceForge account can edit the
Wiki.
The old FAQ is at <http://backuppc.sourceforge.net/faq>, but is
deprecated in favor of the Wiki.
- Mailing lists
- Three BackupPC mailing lists exist for announcements
(backuppc-announce), developers (backuppc-devel), and a general user list
for support, asking questions or any other topic relevant to BackupPC
(backuppc-users).
The lists are archived on SourceForge and Gmane. The SourceForge lists are
not always up to date and the searching is limited, so Gmane is a good
alternative. See:
http://news.gmane.org/index.php?prefix=gmane.comp.sysutils.backup.backuppc
http://sourceforge.net/mailarchive/forum.php?forum=backuppc-users
You can subscribe to these lists by visiting:
http://lists.sourceforge.net/lists/listinfo/backuppc-announce
http://lists.sourceforge.net/lists/listinfo/backuppc-users
http://lists.sourceforge.net/lists/listinfo/backuppc-devel
The backuppc-announce list is moderated and is used only for important
announcements (eg: new versions). It is low traffic. You only need to
subscribe to one of backuppc-announce and backuppc-users: backuppc-users
also receives any messages on backuppc-announce.
The backuppc-devel list is only for developers who are working on BackupPC.
Do not post questions or support requests there. But detailed technical
discussions should happen on this list.
To post a message to the backuppc-users list, send an email to
backuppc-users@lists.sourceforge.net
Do not send subscription requests to this address!
- Other Programs of Interest
- If you want to mirror linux or unix files or directories to
a remote server you should use rsync, <http://rsync.samba.org>.
BackupPC uses rsync as a transport mechanism; if you are already an rsync
user you can think of BackupPC as adding efficient storage (compression
and pooling) and a convenient user interface to rsync.
Two popular open source packages that do tape backup are Amanda
(<http://www.amanda.org>) and Bacula
(<http://www.bacula.org>). These packages can be used as complete
solutions, or also as back ends to BackupPC to backup the BackupPC server
data to tape.
Various programs and scripts use rsync to provide hardlinked backups. See,
for example, Mike Rubel's site
(<http://www.mikerubel.org/computers/rsync_snapshots>), JW Schultz's
dirvish (<http://www.dirvish.org/>), Ben Escoto's rdiff-backup
(<http://www.nongnu.org/rdiff-backup>), and John Bowman's rlbackup
(<http://www.math.ualberta.ca/imaging/rlbackup>).
Unison is a utility that can do two-way, interactive, synchronization. See
<http://freshmeat.net/projects/unison>. An external wrapper around
rsync that maintains transfer data to enable two-way synchronization is
drsync; see <http://freshmeat.net/projects/drsync>.
BackupPC provides many additional features, such as compressed storage,
hardlinking any matching files (rather than just files with the same
name), and storing special files without root privileges. But these other
programs provide simple, effective and fast solutions and are definitely
worthy of consideration.
Road map¶
The new features planned for future releases of BackupPC are on the Wiki at
<http://backuppc.wiki.sourceforge.net>.
Comments and suggestions are welcome.
You can help¶
BackupPC is free. I work on BackupPC because I enjoy doing it and I like to
contribute to the open source community.
BackupPC already has more than enough features for my own needs. The main
compensation for continuing to work on BackupPC is knowing that more and more
people find it useful. So feedback is certainly appreciated, both positive and
negative.
Beyond being a satisfied user and telling other people about it, everyone is
encouraged to add links to <http://backuppc.sourceforge.net> (I'll see
them via Google) or otherwise publicize BackupPC. Unlike the commercial
products in this space, I have a zero budget (in both time and money) for
marketing, PR and advertising, so it's up to all of you! Feel free to vote for
BackupPC at <http://freshmeat.net/projects/backuppc>.
Also, everyone is encouraged to contribute patches, bug reports, feature and
design suggestions, new code, Wiki additions (you can do those directly) and
documentation corrections or improvements. Answering questions on the mailing
list is a big help too.
Installing BackupPC¶
Requirements¶
BackupPC requires:
- A linux, solaris, or unix based server with a substantial
amount of free disk space (see the next section for what that means). The
CPU and disk performance on this server will determine how many
simultaneous backups you can run. You should be able to run 4-8
simultaneous backups on a moderately configured server.
Several users have reported significantly better performance using reiserfs
compared to ext3 for the BackupPC data file system. It is also recommended
you consider either an LVM or RAID setup (either in HW or SW; eg: 3Ware
RAID10 or RAID5) so that you can expand the file system as necessary.
When BackupPC starts with an empty pool, all the backup data will be written
to the pool on disk. After more backups are done, a higher percentage of
incoming files will already be in the pool. BackupPC is able to avoid
writing to disk new files that are already in the pool. So over time disk
writes will reduce significantly (by perhaps a factor of 20 or more),
since eventually 95% or more of incoming backup files are typically in the
pool. Disk reads from the pool are still needed to do file compares to
verify files are an exact match. So, with a mature pool, if a relatively
fast client generates data at say 1MB/sec, and you run 4 simultaneous
backups, there will be an average server disk load of about 4MB/sec reads
and 0.2MB/sec writes (assuming 95% of the incoming files are in the pool).
These rates will be perhaps 40% lower if compression is on.
- Perl version 5.8.0 or later. If you don't have perl, please
see <http://www.cpan.org>.
- Perl modules Compress::Zlib, Archive::Zip and File::RsyncP.
Try "perldoc Compress::Zlib" and "perldoc
Archive::Zip" to see if you have these modules. If not, fetch them
from <http://www.cpan.org> and see the instructions below for how to
build and install them.
The File::RsyncP module is available from
<http://perlrsync.sourceforge.net> or CPAN. You'll need to install
the File::RsyncP module if you want to use Rsync as a transport
method.
- If you are using smb to backup WinXX machines you need
smbclient and nmblookup from the samba package. You will also need
nmblookup if you are backing up linux/unix DHCP machines. See
<http://www.samba.org>. Samba versions 3.x are stable and now
recommended instead of 2.x.
See <http://www.samba.org> for source and binaries. It's pretty easy
to fetch and compile samba, and just grab smbclient and nmblookup, without
doing the installation. Alternatively, <http://www.samba.org> has
binary distributions for most platforms.
- If you are using tar to backup linux/unix machines, those
machines should have version 1.13.7 at a minimum, with version 1.13.20 or
higher recommended. Use "tar --version" to check your version.
Various GNU mirrors have the newest versions of tar; see
<http://www.gnu.org/software/tar/>.
- If you are using rsync to backup linux/unix machines you
should have version 2.6.3 or higher on each client machine. See
<http://rsync.samba.org>. Use "rsync --version" to check
your version.
For BackupPC to use Rsync you will also need to install the perl
File::RsyncP module, which is available from
<http://perlrsync.sourceforge.net>. Version 0.68 or later is
required.
- The Apache web server, see <http://www.apache.org>,
preferably built with mod_perl support.
What type of storage space do I need?¶
BackupPC uses hardlinks to pool files common to different backups. Therefore
BackupPC's data store (__TOPDIR__) must point to a single file system that
supports hardlinks. You cannot split this file system with multiple mount
points or using symbolic links to point a sub-directory to a different file
system (it is ok to use a single symbolic link at the top-level directory
(__TOPDIR__) to point the entire data store somewhere else). You can of course
use any kind of RAID system or logical volume manager that combines the
capacity of multiple disks into a single, larger, file system. Such approaches
have the advantage that the file system can be expanded without having to copy
it.
Any standard linux or unix file system supports hardlinks. NFS mounted file
systems work too (provided the underlying file system supports hardlinks). But
windows based FAT and NTFS file systems will not work.
Starting with BackupPC 3.1.0, run-time checks are done at startup and at the
start of each backup to ensure that the file system can support hardlinks,
since this is a common area of configuration problems.
How much disk space do I need?¶
Here's one real example for an environment that is backing up 65 laptops with
compression off. Each full backup averages 3.2GB. Each incremental backup
averages about 0.2GB. Storing one full backup and two incremental backups per
laptop is around 240GB of raw data. But because of the pooling of identical
files, only 87GB is used. This is without compression.
Another example, with compression on: backing up 95 laptops, where each backup
averages 3.6GB and each incremental averages about 0.3GB. Keeping three weekly
full backups, and six incrementals is around 1200GB of raw data. Because of
pooling and compression, only 150GB is needed.
Here's a rule of thumb. Add up the disk usage of all the machines you want to
backup (210GB in the first example above). This is a rough minimum space
estimate that should allow a couple of full backups and at least half a dozen
incremental backups per machine. If compression is on you can reduce the
storage requirements by maybe 30-40%. Add some margin in case you add more
machines or decide to keep more old backups.
Your actual mileage will depend upon the types of clients, operating systems and
applications you have. The more uniform the clients and applications the
bigger the benefit from pooling common files.
For example, the Eudora email tool stores each mail folder in a separate file,
and attachments are extracted as separate files. So in the sadly common case
of a large attachment emailed to many recipients, Eudora will extract the
attachment into a new file. When these machines are backed up, only one copy
of the file will be stored on the server, even though the file appears in many
different full or incremental backups. In this sense Eudora is a
"friendly" application from the point of view of backup storage
requirements.
An example at the other end of the spectrum is Outlook. Everything (email
bodies, attachments, calendar, contact lists) is stored in a single file,
which often becomes huge. Any change to this file requires a separate copy of
the file to be saved during backup. Outlook is even more troublesome, since it
keeps this file locked all the time, so it cannot be read by smbclient
whenever Outlook is running. See the Limitations section for more discussion
of this problem.
In addition to total disk space, you should make sure you have plenty of inodes
on your BackupPC data partition. Some users have reported running out of
inodes on their BackupPC data partition. So even if you have plenty of disk
space, BackupPC will report failures when the inodes are exhausted. This is a
particular problem with ext2/ext3 file systems that have a fixed number of
inodes when the file system is built. Use "df -i" to see your inode
usage.
Step 1: Getting BackupPC¶
Some linux distributions now include BackupPC. The Debian distribution,
supported by Ludovic Drolez, can be found at
<http://packages.debian.org/backuppc> and is included in the current
stable Debian release. On Debian, BackupPC can be installed with the command:
apt-get install backuppc
In the future there might be packages for Gentoo and other linux flavors. If the
packaged version is older than the released version then you may want to
install the latest version as described below.
Otherwise, manually fetching and installing BackupPC is easy. Start by
downloading the latest version from <http://backuppc.sourceforge.net>.
Hit the "Code" button, then select the "backuppc" or
"backuppc-beta" package and download the latest version.
Step 2: Installing the distribution¶
Note: most information in this step is only relevant if you build and install
BackupPC yourself. If you use a package provided by a distribution, the
package management system should take care of installing any needed dependencies.
First off, there are five perl modules you should install. These are all
optional, but highly recommended:
- Compress::Zlib
- To enable compression, you will need to install
Compress::Zlib from <http://www.cpan.org>. You can run "perldoc
Compress::Zlib" to see if this module is installed.
- Archive::Zip
- To support restore via Zip archives you will need to
install Archive::Zip, also from <http://www.cpan.org>. You can run
"perldoc Archive::Zip" to see if this module is installed.
- XML::RSS
- To support the RSS feature you will need to install
XML::RSS, also from <http://www.cpan.org>. There is no need to
install this module if you don't plan on using RSS. You can run
"perldoc XML::RSS" to see if this module is installed.
- File::RsyncP
- To use rsync and rsyncd with BackupPC you will need to
install File::RsyncP. You can run "perldoc File::RsyncP" to see
if this module is installed. File::RsyncP is available from
<http://perlrsync.sourceforge.net>. Version 0.68 or later is
required.
- File::Listing, Net::FTP, Net::FTP::RetrHandle,
Net::FTP::AutoReconnect
- To use ftp with BackupPC you will need four libraries, but
actually need to install only File::Listing from
<http://www.cpan.org>. You can run "perldoc File::Listing"
to see if this module is installed. Net::FTP is a standard module.
Net::FTP::RetrHandle and Net::FTP::AutoReconnect are included in the BackupPC
distribution.
To build and install these packages you should use the cpan program.
Alternatively, you can fetch the tar.gz file from <http://www.cpan.org>
and then run these commands:
tar zxvf Archive-Zip-1.26.tar.gz
cd Archive-Zip-1.26
perl Makefile.PL
make
make test
make install
The same sequence of commands can be used for each module.
Now let's move onto BackupPC itself. After fetching BackupPC-3.2.1.tar.gz, run
these commands as root:
tar zxf BackupPC-3.2.1.tar.gz
cd BackupPC-3.2.1
perl configure.pl
In the future this release might also have patches available on the SourceForge
site. These patch files are text files, with a name of the form
BackupPC-3.2.1plN.diff
where N is the patch level, eg: pl2 is patch-level 2. These patch files are
cumulative: you only need apply the last patch file, not all the earlier patch
files. If a patch file is available, eg: BackupPC-3.2.1pl2.diff, you should
apply the patch after extracting the tar file:
# fetch BackupPC-3.2.1.tar.gz
# fetch BackupPC-3.2.1pl2.diff
tar zxf BackupPC-3.2.1.tar.gz
cd BackupPC-3.2.1
patch -p0 < ../BackupPC-3.2.1pl2.diff
perl configure.pl
A patch file includes comments that describe the bug fixes and changes. Feel
free to review it before you apply the patch.
The configure.pl script also accepts command-line options if you wish to run it
in a non-interactive manner. It has self-contained documentation for all the
command-line options, which you can read with perldoc:
perldoc configure.pl
Starting with BackupPC 3.0.0, the configure.pl script by default complies with
the Filesystem Hierarchy Standard (FHS) conventions. The major difference compared to
earlier versions is that by default configuration files will be stored in
/etc/BackupPC rather than below the data directory, __TOPDIR__/conf, and the
log files will be stored in /var/log/BackupPC rather than below the data
directory, __TOPDIR__/log.
Note that distributions may choose to use different locations for BackupPC files
than these defaults.
If you are upgrading from an earlier version the configure.pl script will keep
the configuration files and log files in their original location.
When you run configure.pl you will be prompted for the full paths of various
executables, and you will be prompted for the following information.
- BackupPC User
- It is best if BackupPC runs as a special user, eg backuppc,
that has limited privileges. It is preferred that backuppc belongs to a
system administrator group so that sys admin members can browse BackupPC
files, edit the configuration files and so on. Although configurable, the
default settings leave group read permission on pool files, so make sure
the BackupPC user's group is chosen restrictively.
On this installation, this is __BACKUPPCUSER__.
For security purposes you might choose to configure the BackupPC user with
the shell set to /bin/false. Since you might need to run some BackupPC
programs as the BackupPC user for testing purposes, you can use the -s
option to su to explicitly run a shell, eg:
su -s /bin/bash __BACKUPPCUSER__
Depending upon your configuration you might also need the -l option.
- Data Directory
- You need to decide where to put the data directory, below
which all the BackupPC data is stored. This needs to be a big file system.
On this installation, this is __TOPDIR__.
- Install Directory
- You should decide where the BackupPC scripts, libraries and
documentation should be installed, eg: /usr/local/BackupPC.
On this installation, this is __INSTALLDIR__.
- CGI bin Directory
- You should decide where the BackupPC CGI script resides.
This will usually be below Apache's cgi-bin directory.
It is also possible to use a different directory and use Apache's
``<Directory>'' directive to specify that location. See the Apache
HTTP Server documentation for additional information.
On this installation, this is __CGIDIR__.
- Apache image Directory
- A directory where BackupPC's images are stored so that
Apache can serve them. You should ensure this directory is readable by
Apache and create a symlink to this directory from the BackupPC CGI bin
Directory.
- Config and Log Directories
- In this installation the configuration and log directories
are located in the following locations:
__CONFDIR__/config.pl main config file
__CONFDIR__/hosts hosts file
__CONFDIR__/pc/HOST.pl per-pc config file
__LOGDIR__/BackupPC log files, pid, status
The configure.pl script doesn't prompt for these locations but they can be
set for new installations using command-line options.
Step 3: Setting up config.pl¶
After running configure.pl, browse through the config file,
__CONFDIR__/config.pl, and make sure all the default settings are correct. In
particular, you will need to decide whether to use smb, tar, rsync or ftp
transport (or whether to set it on a per-PC basis) and set the relevant
parameters for that transport method. See the section Client Setup for more
details.
Step 4: Setting up the hosts file¶
The file __CONFDIR__/hosts contains the list of clients to backup. BackupPC
reads this file in three cases:
- Upon startup.
- When BackupPC is sent a HUP (-1) signal. Assuming you
installed the init.d script, you can also do this with
"/etc/init.d/backuppc reload".
- When the modification time of the hosts file changes.
BackupPC checks the modification time once during each regular
wakeup.
Whenever you change the hosts file (to add or remove a host) you can either do a
kill -HUP BackupPC_pid or simply wait until the next regular wakeup period.
Each line in the hosts file contains three fields, separated by white space:
- Host name
- This is typically the host name or NetBios name of the
client machine and should be in lower case. The host name can contain
spaces (escape with a backslash), but it is not recommended.
Please read the section How BackupPC Finds Hosts.
In certain cases you might want several distinct clients to refer to the
same physical machine. For example, you might have a database you want to
backup, and you want to bracket the backup of the database with
shutdown/restart using $Conf{DumpPreUserCmd} and $Conf{DumpPostUserCmd}.
But you also want to backup the rest of the machine while the database is
still running. In this case you can specify two different clients in the
host file, using any mnemonic name (eg: myhost_mysql and myhost), and use
$Conf{ClientNameAlias} in myhost_mysql's config.pl to specify the real
host name of the machine (a per-PC config sketch for this case is shown
after the example hosts file below).
- DHCP flag
- Starting with v2.0.0 the way hosts are discovered has
changed and now in most cases you should specify 0 for the DHCP flag, even
if the host has a dynamically assigned IP address. Please read the section
How BackupPC Finds Hosts to understand whether you need to set the DHCP
flag.
You only need to set DHCP to 1 if your client machine doesn't respond to the
NetBios multicast request:
nmblookup myHost
but does respond to a request directed to its IP address:
nmblookup -A W.X.Y.Z
If you do set DHCP to 1 on any client you will need to specify the range of
DHCP addresses to search in $Conf{DHCPAddressRanges}.
Note also that the $Conf{ClientNameAlias} feature does not work for clients
with DHCP set to 1.
- User name
- This should be the unix login/email name of the user who
"owns" or uses this machine. This is the user who will be sent
email about this machine, and this user will have permission to
stop/start/browse/restore backups for this host. Leave this blank if no
specific person should receive email or be allowed to
stop/start/browse/restore backups for this host. Administrators will still
have full permissions.
- More users
- Additional user names, separated by commas and with no white
space, can be specified. These users will also have full permission in the
CGI interface to stop/start/browse/restore backups for this host. These
users will not be sent email about this host.
The first non-comment line of the hosts file is special: it contains the names
of the columns and should not be edited.
Here's a simple example of a hosts file:
host dhcp user moreUsers
farside 0 craig jim,dave
larson 1 gary andy
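For the database example mentioned above, the per-PC config file for the
myhost_mysql client might look roughly like this (the stop and start commands
are hypothetical placeholders; BackupPC substitutes $sshPath and $host):
# Hypothetical __CONFDIR__/pc/myhost_mysql.pl: back up the database share
# on the real machine "myhost", stopping the database around the dump.
$Conf{ClientNameAlias} = 'myhost';
$Conf{DumpPreUserCmd}  = '$sshPath -q -x -l root $host /usr/local/bin/db-stop';
$Conf{DumpPostUserCmd} = '$sshPath -q -x -l root $host /usr/local/bin/db-start';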
Step 5: Client Setup¶
Four methods for getting backup data from a client are supported: smb, tar,
rsync and ftp. Smb or rsync are the preferred methods for WinXX clients and
rsync or tar are the preferred methods for linux/unix/MacOSX clients.
The transfer method is set using the $Conf{XferMethod} configuration setting. If
you have a mixed environment (ie: you will use smb for some clients and tar
for others), you will need to pick the most common choice for
$Conf{XferMethod} for the main config.pl file, and then override it in the
per-PC config file for those hosts that will use the other method. (Or you
could run two completely separate instances of BackupPC, with different data
directories, one for WinXX and the other for linux/unix, but then common files
between the different machine types will be duplicated.)
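For instance, assuming smb is the majority method, the main config.pl could
set it globally and a per-PC file could override it for a particular linux
host (the host name is a placeholder):
# In the main __CONFDIR__/config.pl: default transfer method for most hosts.
$Conf{XferMethod} = 'smb';
# In a hypothetical __CONFDIR__/pc/somelinuxhost.pl: override for this host.
$Conf{XferMethod} = 'rsync';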
Here are some brief client setup notes:
- WinXX
- One setup for WinXX clients is to set $Conf{XferMethod} to
"smb". Actually, rsyncd is the better method for WinXX if you
are prepared to run rsync/cygwin on your WinXX client.
If you want to use rsyncd for WinXX clients you can find a pre-packaged zip
file on <http://backuppc.sourceforge.net>. The package is called
cygwin-rsync. It contains rsync.exe, template setup files and the minimal
set of cygwin libraries for everything to run. The README file contains
instructions for running rsync as a service, so it starts automatically
every time you boot your machine. If you use rsync to backup WinXX
machines, be sure to set $Conf{ClientCharset} correctly (eg: 'cp1252') so
that the WinXX file name encoding is correctly converted to utf8.
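A minimal per-PC sketch for such a WinXX client might look like this (the
share name must match whatever rsyncd module your client's rsyncd.conf
defines; 'cp1252' is the usual choice for western European Windows installs):
# Hypothetical per-PC settings for a WinXX client running cygwin rsyncd.
$Conf{XferMethod}     = 'rsyncd';
$Conf{RsyncShareName} = 'cDrive';   # rsyncd module name from the client's rsyncd.conf
$Conf{ClientCharset}  = 'cp1252';   # convert WinXX file names to utf8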
Otherwise, to use SMB, you can either create shares for the data you want to
backup or you can use the existing C$ share. To create a new share, open
"My Computer", right click on the drive (eg: C), and select
"Sharing..." (or select "Properties" and select the
"Sharing" tab). In this dialog box you can enable sharing,
select the share name and permissions.
All Windows NT based OSes (NT, 2000, XP Pro) are configured by default to
share the entire C drive as C$. This is a special share used for various
administration functions, one of which is to grant access to backup
operators. All you need to do is create a new domain user, specifically
for backup. Then add the new backup user to the built in "Backup
Operators" group. You now have backup capability for any directory on
any computer in the domain in one easy step. This avoids using
administrator accounts and only grants permission to do exactly what you
want for the given user, i.e.: backup. Also, for additional security, you
may wish to deny the ability for this user to logon to computers in the
default domain policy.
If this machine uses DHCP you will also need to make sure the NetBios name
is set. Go to Control Panel|System|Network Identification (on Win2K) or
Control Panel|System|Computer Name (on WinXP). Also, you should go to
Control Panel|Network Connections|Local Area
Connection|Properties|Internet Protocol (TCP/IP)|Properties|Advanced|WINS
and verify that NetBios is not disabled.
The relevant configuration settings are $Conf{SmbShareName},
$Conf{SmbShareUserName}, $Conf{SmbSharePasswd}, $Conf{SmbClientPath},
$Conf{SmbClientFullCmd}, $Conf{SmbClientIncrCmd} and
$Conf{SmbClientRestoreCmd}.
BackupPC needs to know the smb share user name and password for a client
machine that uses smb. The user name is specified in
$Conf{SmbShareUserName}. There are four ways to tell BackupPC the smb
share password:
- As an environment variable BPC_SMB_PASSWD set before
BackupPC starts. If you start BackupPC manually the BPC_SMB_PASSWD
variable must be set manually first. For backward compatibility for v1.5.0
and prior, the environment variable PASSWD can be used if BPC_SMB_PASSWD
is not set. Warning: on some systems it is possible to see environment
variables of running processes.
- Alternatively the BPC_SMB_PASSWD setting can be included in
/etc/init.d/backuppc, in which case you must make sure this file is not
world (other) readable.
- As a configuration variable $Conf{SmbSharePasswd} in
__CONFDIR__/config.pl. If you put the password here you must make sure
this file is not world (other) readable.
- As a configuration variable $Conf{SmbSharePasswd} in the
per-PC configuration file (__CONFDIR__/pc/$host.pl or
__TOPDIR__/pc/$host/config.pl in non-FHS versions of BackupPC). You will
have to use this option if the smb share password is different for each
host. If you put the password here you must make sure this file is not
world (other) readable.
Placement and protection of the smb share password is a possible security risk,
so please double-check the file and directory permissions. In a future version
there might be support for encryption of this password, but a private key will
still have to be stored in a protected place. Suggestions are welcome.
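For example, a per-PC file could carry the smb settings along these lines
(share, user and password values are placeholders); keep the file readable
only by the BackupPC user:
# Hypothetical per-PC smb settings; this file must not be world readable.
$Conf{XferMethod}       = 'smb';
$Conf{SmbShareName}     = 'C$';
$Conf{SmbShareUserName} = 'backupuser';
$Conf{SmbSharePasswd}   = 'secret';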
As an alternative to setting $Conf{XferMethod} to "smb" (using
smbclient) for WinXX clients, you can use an smb network filesystem (eg:
ksmbfs or similar) on your linux/unix server to mount the share, and then set
$Conf{XferMethod} to "tar" (use tar on the network mounted file
system).
Also, to make sure that file names with special characters are correctly
transferred by smbclient you should make sure that the smb.conf file has (for
samba 3.x):
[global]
unix charset = UTF8
UTF8 is the default setting, so if the parameter is missing then it is ok. With
this setting $Conf{ClientCharset} should be empty, since smbclient has already
converted the file names to utf8.
- Linux/Unix
- The preferred setup for linux/unix clients is to set
$Conf{XferMethod} to "rsync", "rsyncd" or
"tar".
You can use either rsync, smb, or tar for linux/unix machines. Smb requires
that the Samba server (smbd) be run to provide the shares. Since the smb
protocol can't represent special files like symbolic links and fifos, tar
and rsync are the better transport methods for linux/unix machines. (In
fact, by default samba makes symbolic links look like the file or
directory that they point to, so you could get an infinite loop if a
symbolic link points to the current or parent directory. If you really
need to use Samba shares for linux/unix backups you should turn off the
"follow symlinks" samba config setting. See the smb.conf manual
page.)
The requirements for each Xfer Method are:
- tar
- You must have GNU tar on the client machine. Use "tar
--version" or "gtar --version" to verify. The version
should be at least 1.13.7, and 1.13.20 or greater is recommended. Tar is
run on the client machine via rsh or ssh.
The relevant configuration settings are $Conf{TarClientPath},
$Conf{TarShareName}, $Conf{TarClientCmd}, $Conf{TarFullArgs},
$Conf{TarIncrArgs}, and $Conf{TarClientRestoreCmd}.
- rsync
- You should have at least rsync 2.6.3, and the latest
version is recommended. Rsync is run on the remote client via rsh or ssh.
The relevant configuration settings are $Conf{RsyncClientPath},
$Conf{RsyncClientCmd}, $Conf{RsyncClientRestoreCmd},
$Conf{RsyncShareName}, $Conf{RsyncArgs}, and $Conf{RsyncRestoreArgs}.
- rsyncd
- You should have at least rsync 2.6.3, and the latest
version is recommended. In this case the rsync daemon should be running on
the client machine and BackupPC connects directly to it.
The relevant configuration settings are $Conf{RsyncdClientPort},
$Conf{RsyncdUserName}, $Conf{RsyncdPasswd}, $Conf{RsyncdAuthRequired},
$Conf{RsyncShareName}, $Conf{RsyncArgs}, $Conf{RsyncArgsExtra}, and
$Conf{RsyncRestoreArgs}. $Conf{RsyncShareName} is the name of an rsync
module (ie: the thing in square brackets in rsyncd's conf file -- see
rsyncd.conf), not a file system path.
Be aware that rsyncd will remove the leading '/' from path names in symbolic
links if you specify "use chroot = no" in the rsyncd.conf file.
See the rsyncd.conf manual page for more information.
- ftp
- You need to be running an ftp server on the client machine.
The relevant configuration settings are $Conf{FtpShareName},
$Conf{FtpUserName}, $Conf{FtpPasswd}, $Conf{FtpBlockSize}, $Conf{FtpPort},
$Conf{FtpTimeout}, and $Conf{FtpFollowSymlinks}.
You need to set $Conf{ClientCharset} to the client's charset so that file names
are correctly converted to utf8. Use "locale charmap" on the client
to see its charset.
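Pulling the ftp settings and the charset together, a rough per-PC sketch
(user, password, share and charset values are placeholders) could be:
# Hypothetical per-PC ftp settings.
$Conf{XferMethod}    = 'ftp';
$Conf{FtpShareName}  = '/home';
$Conf{FtpUserName}   = 'backup';
$Conf{FtpPasswd}     = 'secret';
$Conf{ClientCharset} = 'ISO-8859-1';   # output of "locale charmap" on the client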
For linux/unix machines you should not backup "/proc". This directory
contains a variety of files that look like regular files but they are special
files that don't need to be backed up (eg: /proc/kcore is a regular file that
contains physical memory). See $Conf{BackupFilesExclude}. It is safe to back
up /dev since it contains mostly character-special and block-special files,
which are correctly handled by BackupPC (eg: backing up /dev/hda5 just saves
the block-special file information, not the contents of the disk).
Alternatively, rather than backing up all the file systems as a single share
("/"), it is easier to restore a single file system if you back up
each file system separately. To do this you should list each file system mount
point in $Conf{TarShareName} or $Conf{RsyncShareName}, and add the
--one-file-system option to $Conf{TarClientCmd} or $Conf{RsyncArgs}. In this
case there is no need to exclude /proc explicitly since it looks like a
different file system.
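A per-PC sketch of that approach (the mount points are examples only, and
$Conf{RsyncArgsExtra} is assumed to be available, as documented for 3.2.x)
might be:
# Back up each file system as its own rsync share instead of a single "/".
$Conf{RsyncShareName} = ['/', '/home', '/var'];
# Extra rsync arguments appended to $Conf{RsyncArgs}: stay within each
# file system, so /proc does not need an explicit exclude.
$Conf{RsyncArgsExtra} = ['--one-file-system'];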
Next you should decide whether to run tar over ssh, rsh or nfs. Ssh is the
preferred method. Rsh is not secure and therefore not recommended. Nfs will
work, but you need to make sure that the BackupPC user (running on the server)
has sufficient permissions to read all the files below the nfs mount.
Ssh allows BackupPC to run as a privileged user on the client (eg: root), since
it needs sufficient permissions to read all the backup files. Ssh is setup so
that BackupPC on the server (an otherwise low privileged user) can ssh as root
on the client, without being prompted for a password. There are two common
versions of ssh: v1 and v2. Here are some instructions for one way to setup
ssh. (Check which version of SSH you have by typing "ssh" or
"man ssh".)
- MacOSX
- In general this should be similar to Linux/Unix machines.
In versions 10.4 and later, the native MacOSX tar works, and also supports
resource forks. xtar is another option, and rsync works too (although the
MacOSX-supplied rsync has an extension for extended attributes that is not
compatible with standard rsync).
- SSH Setup
- SSH is a secure way to run tar or rsync on a backup client
to extract the data. SSH provides strong authentication and encryption of
the network data.
Note that if you run rsyncd (rsync daemon), ssh is not used. In this case,
rsyncd provides its own authentication, but there is no encryption of
network data. If you want encryption of network data you can use ssh to
create a tunnel, or use a program like stunnel.
Setup instructions for ssh can be found at
<http://backuppc.sourceforge.net/faq/ssh.html> or on the Wiki.
- Clients that use DHCP
- If a client machine uses DHCP BackupPC needs some way to
find the IP address given the host name. One alternative is to set dhcp to
1 in the hosts file, and BackupPC will search a pool of IP addresses
looking for hosts. However, it is more efficient to set dhcp = 0 and
provide a mechanism for BackupPC to find the IP address given the host
name.
For WinXX machines BackupPC uses the NetBios name server to determine the IP
address given the host name. For unix machines you can run nmbd (the
NetBios name server) from the Samba distribution so that the machine
responds to a NetBios name request. See the manual page and Samba
documentation for more information.
Alternatively, you can set $Conf{NmbLookupFindHostCmd} to any command that
returns the IP address given the host name.
Please read the section How BackupPC Finds Hosts for more details.
Step 6: Running BackupPC¶
The installation contains an init.d backuppc script that can be copied to
/etc/init.d so that BackupPC can auto-start on boot. See init.d/README for
further instructions.
BackupPC should be ready to start. If you installed the init.d script, then you
should be able to run BackupPC with:
/etc/init.d/backuppc start
(This script can also be invoked with "stop" to stop BackupPC and
"reload" to tell BackupPC to reload config.pl and the hosts file.)
Otherwise, just run
__INSTALLDIR__/bin/BackupPC -d
as user __BACKUPPCUSER__. The -d option tells BackupPC to run as a daemon (ie:
it does an additional fork).
Any immediate errors will be printed to stderr and BackupPC will quit.
Otherwise, look in __LOGDIR__/LOG and verify that BackupPC reports it has
started and all is ok.
Step 7: Talking to BackupPC¶
You should verify that BackupPC is running by using BackupPC_serverMesg. This
sends a message to BackupPC via the unix (or TCP) socket and prints the
response. Like all BackupPC programs, BackupPC_serverMesg should be run as the
BackupPC user (__BACKUPPCUSER__), so you should
su __BACKUPPCUSER__
before running BackupPC_serverMesg. If the BackupPC user is configured with
/bin/false as the shell, you can use the -s option to su to explicitly run a
shell, eg:
su -s /bin/bash __BACKUPPCUSER__
Depending upon your configuration you might also need the -l option.
You can request status information and start and stop backups using this
interface. This socket interface is mainly provided for the CGI interface (and
some of the BackupPC sub-programs use it too). But right now we just want to
make sure BackupPC is happy. Each of these commands should produce some status
output:
__INSTALLDIR__/bin/BackupPC_serverMesg status info
__INSTALLDIR__/bin/BackupPC_serverMesg status jobs
__INSTALLDIR__/bin/BackupPC_serverMesg status hosts
The output should be some hashes printed with Data::Dumper. If it looks cryptic
and confusing, and doesn't look like an error message, then all is ok.
The jobs status should initially show just BackupPC_trashClean. The hosts status
should produce a list of every host you have listed in __CONFDIR__/hosts as
part of a big cryptic output line.
You can also request that all hosts be queued:
__INSTALLDIR__/bin/BackupPC_serverMesg backup all
At this point you should make sure the CGI interface works since it will be much
easier to see what is going on. That's our next subject.
Step 8: Checking email delivery¶
The script BackupPC_sendEmail sends status and error emails to the administrator
and users. It is usually run each night by BackupPC_nightly.
To verify that it can run sendmail and deliver email correctly you should ask it
to send a test email to you:
su __BACKUPPCUSER__
__INSTALLDIR__/bin/BackupPC_sendEmail -u MYNAME@MYDOMAIN.COM
BackupPC_sendEmail also takes a -c option that checks if BackupPC is running,
and it sends an email to $Conf{EMailAdminUserName} if it is not. That can be
used as a keep-alive check by adding
__INSTALLDIR__/bin/BackupPC_sendEmail -c
to __BACKUPPCUSER__'s cron.
The -t option to BackupPC_sendEmail causes it to print the email message instead
of invoking sendmail to deliver the message.
Step 9: CGI interface¶
The CGI interface script, BackupPC_Admin, is a powerful and flexible way to see
and control what BackupPC is doing. It is written for an Apache server. If you
don't have Apache, see <http://www.apache.org>.
There are two options for setting up the CGI interface: standard mode and using
mod_perl. Mod_perl provides much higher performance (around 15x) and is the
best choice if your Apache was built with mod_perl support. To see if your
apache was built with mod_perl run this command:
httpd -l | egrep mod_perl
If this prints mod_perl.c then your Apache supports mod_perl.
Note: on some distributions (like Debian) the command is not ``httpd'', but
``apache'' or ``apache2''. Those distributions will generally also use
``apache'' for the Apache user account and configuration files.
Using mod_perl with BackupPC_Admin requires a dedicated Apache to be run as the
BackupPC user (__BACKUPPCUSER__). This is because BackupPC_Admin needs
permission to access various files in BackupPC's data directories. In
contrast, the standard installation (without mod_perl) solves this problem by
having BackupPC_Admin installed as setuid to the BackupPC user, so that
BackupPC_Admin runs as the BackupPC user.
Here are some specifics for each setup:
- Standard Setup
- The CGI interface should have been installed by the
configure.pl script in __CGIDIR__/BackupPC_Admin. BackupPC_Admin should
have been installed as setuid to the BackupPC user (__BACKUPPCUSER__), in
addition to user and group execute permission.
You should be very careful about permissions on BackupPC_Admin and the
directory __CGIDIR__: it is important that normal users cannot directly
execute or change BackupPC_Admin, otherwise they can access backup files
for any PC. You might need to change the group ownership of BackupPC_Admin
to a group that Apache belongs to so that Apache can execute it (don't add
"other" execute permission!). The permissions should look like
this:
ls -l __CGIDIR__/BackupPC_Admin
-rwsr-x--- 1 __BACKUPPCUSER__ web 82406 Jun 17 22:58 __CGIDIR__/BackupPC_Admin
The setuid script won't work unless perl on your machine was installed with
setuid emulation. This is likely the problem if you get an error such
as "Wrong user: my userid is 25, instead of 150", meaning
the script is running as the httpd user, not the BackupPC user. This is
because setuid scripts are disabled by the kernel in most flavors of unix
and linux.
To see if your perl has setuid emulation, see if there is a program called
sperl5.8.0 (or sperl5.8.2 etc, based on your perl version) in the place
where perl is installed. If you can't find this program, then you have two
options: rebuild and reinstall perl with the setuid emulation turned on
(answer "y" to the question "Do you want to do
setuid/setgid emulation?" when you run perl's configure script), or
switch to the mod_perl alternative for the CGI script (which doesn't need
setuid to work).
- Mod_perl Setup
- The advantage of the mod_perl setup is that no setuid
script is needed, and there is a huge performance advantage. Not only is
all the perl code parsed just once; the config.pl and hosts
files, plus the connection to the BackupPC server, are cached between
requests. The typical speedup is around 15 times.
To use mod_perl you need to run Apache as user __BACKUPPCUSER__. If you need
to run multiple Apache instances for different services then you need to create
multiple top-level Apache directories, each with their own config file.
You can make copies of /etc/init.d/httpd and use the -d option to httpd to
point each httpd to a different top-level directory. Or you can use the -f
option to explicitly point to the config file. Multiple Apache instances will run
on different Ports (eg: 80 is standard, 8080 is a typical alternative port
accessed via http://yourhost.com:8080).
Inside BackupPC's Apache http.conf file you should check the settings for
ServerRoot, DocumentRoot, User, Group, and Port. See
<http://httpd.apache.org/docs/server-wide.html> for more details.
For mod_perl, BackupPC_Admin should not have setuid permission, so you
should turn it off:
chmod u-s __CGIDIR__/BackupPC_Admin
To tell Apache to use mod_perl to execute BackupPC_Admin, add this to
the Apache 1.x httpd.conf file:
<IfModule mod_perl.c>
PerlModule Apache::Registry
PerlTaintCheck On
<Location /cgi-bin/BackupPC/BackupPC_Admin> # <--- change path as needed
SetHandler perl-script
PerlHandler Apache::Registry
Options ExecCGI
PerlSendHeader On
</Location>
</IfModule>
For Apache 2.0.44 with Perl 5.8.0 on RedHat 7.1, Don Silvia reports that this
works (with tweaks from Michael Tuzi):
LoadModule perl_module modules/mod_perl.so
PerlModule Apache2
<Directory /path/to/cgi/>
SetHandler perl-script
PerlResponseHandler ModPerl::Registry
PerlOptions +ParseHeaders
Options +ExecCGI
Order deny,allow
Deny from all
Allow from 192.168.0
AuthName "Backup Admin"
AuthType Basic
AuthUserFile /path/to/user_file
Require valid-user
</Directory>
There are other optimizations and options with mod_perl. For example, you
can tell mod_perl to preload various perl modules, which saves memory
compared to loading separate copies in every Apache process after they are
forked. See Stas's definitive mod_perl guide at
<http://perl.apache.org/guide>.
BackupPC_Admin requires that users are authenticated by Apache. Specifically, it
expects that Apache sets the REMOTE_USER environment variable when it runs.
There are several ways to do this. One way is to create a .htaccess file in
the cgi-bin directory that looks like:
AuthGroupFile /etc/httpd/conf/group # <--- change path as needed
AuthUserFile /etc/httpd/conf/passwd # <--- change path as needed
AuthType basic
AuthName "access"
require valid-user
You will also need "AllowOverride Indexes AuthConfig" in the Apache
httpd.conf file to enable the .htaccess file. Alternatively, everything can go
in the Apache httpd.conf file inside a Location directive. The list of users
and password file above can be extracted from the NIS passwd file.
One alternative is to use LDAP. In Apache's http.conf add these lines:
LoadModule auth_ldap_module modules/auth_ldap.so
AddModule auth_ldap.c
# cgi-bin - auth via LDAP (for BackupPC)
<Location /cgi-bin/BackupPC/BackupPC_Admin> # <--- change path as needed
AuthType Basic
AuthName "BackupPC login"
# replace MYDOMAIN, PORT, ORG and CO as needed
AuthLDAPURL ldap://ldap.MYDOMAIN.com:PORT/o=ORG,c=CO?uid?sub?(objectClass=*)
require valid-user
</Location>
If you want to disable the user authentication you can set $Conf{CgiAdminUsers}
to '*', which allows any user to have full access to all hosts and backups. In
this case the REMOTE_USER environment variable does not have to be set by
Apache.
Alternatively, you can force a particular user name by getting Apache to set
REMOTE_USER, eg, to hardcode the user to www you could add this to Apache's
httpd.conf:
<Location /cgi-bin/BackupPC/BackupPC_Admin> # <--- change path as needed
Setenv REMOTE_USER www
</Location>
Finally, you should also edit the config.pl file and adjust, as necessary, the
CGI-specific settings. They're near the end of the config file. In particular,
you should specify which users or groups have administrator (privileged)
access: see the config settings $Conf{CgiAdminUserGroup} and
$Conf{CgiAdminUsers}. Also, the configure.pl script placed various images into
$Conf{CgiImageDir} that BackupPC_Admin needs to serve up. You should make sure
that $Conf{CgiImageDirURL} is the correct URL for the image directory.
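For example (the group and user names are placeholders):
# Members of this group, plus these two users, get full (privileged) access.
$Conf{CgiAdminUserGroup} = 'sysadmin';
$Conf{CgiAdminUsers}     = 'alice bob';
# URL under which Apache serves the images placed in $Conf{CgiImageDir}.
$Conf{CgiImageDirURL}    = '/BackupPC';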
See the section Fixing installation problems for suggestions on debugging the
Apache authentication setup.
How BackupPC Finds Hosts¶
Starting with v2.0.0 the way hosts are discovered has changed. In most cases you
should specify 0 for the DHCP flag in the conf/hosts file, even if the host
has a dynamically assigned IP address.
BackupPC (starting with v2.0.0) looks up hosts with DHCP = 0 in this manner:
- First DNS is used to lookup the IP address given the
client's name using perl's gethostbyname() function. This should
succeed for machines that have fixed IP addresses that are known via DNS.
You can manually see whether a given host has a DNS entry according to
perl's gethostbyname function with this command:
perl -e 'print(gethostbyname("myhost") ? "ok\n" : "not found\n");'
- If gethostbyname() fails, BackupPC then attempts a
NetBios multicast to find the host. Provided your client machine is
configured properly, it should respond to this NetBios multicast request.
Specifically, BackupPC runs a command of this form:
nmblookup myhost
If this fails you will see output like:
querying myhost on 10.10.255.255
name_query failed to find name myhost
If it is successful you will see output like:
querying myhost on 10.10.255.255
10.10.1.73 myhost<00>
Depending on your netmask you might need to specify the -B option to
nmblookup. For example:
nmblookup -B 10.10.1.255 myhost
If necessary, experiment with the nmblookup command which will return the IP
address of the client given its name. Then update
$Conf{NmbLookupFindHostCmd} with any necessary options to nmblookup.
Hosts that have the DHCP flag set to 1 are discovered as follows:
- A DHCP address pool ($Conf{DHCPAddressRanges}) needs to be
specified. BackupPC will check the NetBIOS name of each machine in the
range using a command of the form:
nmblookup -A W.X.Y.Z
where W.X.Y.Z is each candidate address from $Conf{DHCPAddressRanges}. Any
host that has a valid NetBIOS name returned by this command (ie: matching
an entry in the hosts file) will be backed up. You can modify the specific
nmblookup command if necessary via $Conf{NmbLookupCmd}.
- You only need to use this DHCP feature if your client
machine doesn't respond to the NetBios multicast request:
nmblookup myHost
but does respond to a request directed to its IP address:
nmblookup -A W.X.Y.Z
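The address pool itself is given by $Conf{DHCPAddressRanges}; a sketch that
searches 192.168.10.20 through 192.168.10.250 (adjust to your own subnet, and
check config.pl for the exact key names) would be:
# Candidate addresses that BackupPC probes with "nmblookup -A" for
# clients whose dhcp flag is set to 1 in the hosts file.
$Conf{DHCPAddressRanges} = [
    {
        ipAddrBase => '192.168.10',
        first      => 20,
        last       => 250,
    },
];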
Other installation topics¶
- Removing a client
- If there is a machine that no longer needs to be backed up
(eg: a retired machine) you have two choices. First, you can keep the
backups accessible and browsable, but disable all new backups.
Alternatively, you can completely remove the client and all its backups.
To disable backups for a client $Conf{BackupsDisable} can be set to two
different values in that client's per-PC config.pl file:
- 1: Don't do any regular backups on this machine. Manually
requested backups (via the CGI interface) will still occur.
- 2: Don't do any backups on this machine. Manually requested
backups (via the CGI interface) will be ignored.
This will still allow the client's old backups to be browsable and restorable.
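For instance, a retired client's per-PC config file could contain just this
(the file name is hypothetical):
# Hypothetical __CONFDIR__/pc/oldhost.pl: stop all new backups, including
# manual requests, but keep the existing backups browsable and restorable.
$Conf{BackupsDisable} = 2;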
To completely remove a client and all its backups, you should remove its entry
in the conf/hosts file, and then delete the __TOPDIR__/pc/$host directory.
Whenever you change the hosts file, you should send BackupPC a HUP (-1) signal
so that it re-reads the hosts file. If you don't do this, BackupPC will
automatically re-read the hosts file at the next regular wakeup.
Note that when you remove a client's backups you won't initially recover much
disk space. That's because the client's files are still in the pool.
Overnight, when BackupPC_nightly next runs, all the unused pool files will be
deleted and this will recover the disk space used by the client's
backups.
- Copying the pool
- If the pool disk requirements grow you might need to copy
the entire data directory to a new (bigger) file system. Hopefully you are
lucky enough to avoid this by having the data directory on a RAID file
system or LVM that allows the capacity to be grown in place by adding
disks.
The backup data directories contain large numbers of hardlinks. If you copy
the pool without re-establishing the hardlinks, the target directory will
occupy far more space than the original.
The best way to copy a pool file system, if possible, is by copying the raw
device at the block level (eg: using dd). Application level programs that
understand hardlinks include the GNU cp program with the -a option and
rsync -H. However, the large number of hardlinks in the pool will make the
memory usage large and the copy very slow. Don't forget to stop BackupPC
while the copy runs.
Starting in 3.0.0 a new script bin/BackupPC_tarPCCopy can be used to assist
the copy process. Given one or more pc paths (eg: TOPDIR/pc/HOST or
TOPDIR/pc/HOST/nnn), BackupPC_tarPCCopy creates a tar archive with all the
hardlinks pointing to ../cpool/.... Any files not hardlinked (eg: backups,
LOG etc) are included verbatim.
You will need to specify the -P option to tar when you extract the archive
generated by BackupPC_tarPCCopy since the hardlink targets are outside of
the directory being extracted.
To copy a complete store (ie: __TOPDIR__) using BackupPC_tarPCCopy you
should:
- •
- stop BackupPC so that the store is static.
- •
- copy the cpool, conf and log directory trees using any
technique (like cp, rsync or tar) without the need to preserve
hardlinks.
- •
- copy the pc directory using BackupPC_tarPCCopy:
su __BACKUPPCUSER__
cd NEW_TOPDIR
mkdir pc
cd pc
__INSTALLDIR__/bin/BackupPC_tarPCCopy __TOPDIR__/pc | tar xvPf -
Fixing installation problems¶
Please see the Wiki at <http://backuppc.wiki.sourceforge.net> for
debugging suggestions. If you find a solution to your problem that could help
other users please add it to the Wiki!
Restore functions¶
BackupPC supports several different methods for restoring files. The most
convenient restore options are provided via the CGI interface. Alternatively,
backup files can be restored using manual commands.
CGI restore options¶
By selecting a host in the CGI interface, a list of all the backups for that
machine will be displayed. By selecting the backup number you can navigate the
shares and directory tree for that backup.
BackupPC's CGI interface automatically fills incremental backups with the
corresponding full backup, which means each backup has a filled appearance.
Therefore, there is no need to do multiple restores from the incremental and
full backups: BackupPC does all the hard work for you. You simply select the
files and directories you want from the correct backup vintage in one step.
You can download a single backup file at any time simply by selecting it. Your
browser should prompt you with the file name and ask you whether to open the
file or save it to disk.
Alternatively, you can select one or more files or directories in the currently
selected directory and select "Restore selected files". (If you need
to restore selected files and directories from several different parent
directories you will need to do that in multiple steps.)
If you select all the files in a directory, BackupPC will replace the list of
files with the parent directory. You will be presented with a screen that has
three options:
- Option 1: Direct Restore
- With this option the selected files and directories are
restored directly back onto the host, by default in their original
location. Any old files with the same name will be overwritten, so use
caution. You can optionally change the target host name, target share
name, and target path prefix for the restore, allowing you to restore the
files to a different location.
Once you select "Start Restore" you will be prompted one last time
with a summary of the exact source and target files and directories before
you commit. When you give the final go ahead the restore operation will be
queued like a normal backup job, meaning that it will be deferred if there
is a backup currently running for that host. When the restore job is run,
smbclient, tar, rsync or rsyncd is used (depending upon $Conf{XferMethod})
to actually restore the files. Sorry, there is currently no option to
cancel a restore that has been started. Currently ftp restores are not
fully implemented.
A record of the restore request, including the result and list of files and
directories, is kept. It can be browsed from the host's home page.
$Conf{RestoreInfoKeepCnt} specifies how many old restore status files to
keep.
Note that for direct restore to work, the $Conf{XferMethod} must be able to
write to the client. For example, that means an SMB share for smbclient
needs to be writable, and the rsyncd module needs "read only"
set to "false". This creates additional security risks. If you
only create read-only SMB shares (which is a good idea), then the direct
restore will fail. You can disable the direct restore option by setting
$Conf{SmbClientRestoreCmd}, $Conf{TarClientRestoreCmd} and
$Conf{RsyncRestoreArgs} to undef.
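For example, adding the following to config.pl disables direct restores for all three of these transfer methods:
$Conf{SmbClientRestoreCmd} = undef;
$Conf{TarClientRestoreCmd} = undef;
$Conf{RsyncRestoreArgs}    = undef;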
- Option 2: Download Zip archive
- With this option a zip file containing the selected files
and directories is downloaded. The zip file can then be unpacked or
individual files extracted as necessary on the host machine. The
compression level can be specified. A value of 0 turns off compression.
When you select "Download Zip File" you should be prompted where
to save the restore.zip file.
BackupPC does not consider downloading a zip file as an actual restore
operation, so the details are not saved for later browsing as in the first
case. However, a mention that a zip file was downloaded by a particular
user, and a list of the files, does appear in BackupPC's log file.
- Option 3: Download Tar archive
- This is identical to the previous option, except a tar file
is downloaded rather than a zip file (and there is currently no
compression option).
Command-line restore options¶
Apart from the CGI interface, BackupPC allows you to restore files and
directories from the command line. The following programs can be used:
- BackupPC_zcat
- For each file name argument it inflates (uncompresses) the
file and writes it to stdout. To use BackupPC_zcat you could give it the
full file name, eg:
__INSTALLDIR__/bin/BackupPC_zcat __TOPDIR__/pc/host/5/fc/fcraig/fexample.txt > example.txt
It's your responsibility to make sure the file is really compressed:
BackupPC_zcat doesn't check which backup the requested file is from.
BackupPC_zcat returns a non-zero status if it fails to uncompress a
file.
- BackupPC_tarCreate
- BackupPC_tarCreate creates a tar file for any files or
directories in a particular backup. Merging of incrementals is done
automatically, so you don't need to worry about whether certain files
appear in the incremental or full backup.
The usage is:
BackupPC_tarCreate [options] files/directories...
Required options:
-h host host from which the tar archive is created
-n dumpNum dump number from which the tar archive is created
A negative number means relative to the end (eg -1
means the most recent dump, -2 2nd most recent etc).
-s shareName share name from which the tar archive is created
Other options:
-t print summary totals
-r pathRemove path prefix that will be replaced with pathAdd
-p pathAdd new path prefix
-b BLOCKS BLOCKS x 512 bytes per record (default 20; same as tar)
-w writeBufSz write buffer size (default 1048576 = 1MB)
-e charset charset for encoding file names (default: value of
$Conf{ClientCharset} when backup was done)
-l just print a file listing; don't generate an archive
-L just print a detailed file listing; don't generate an archive
The command-line files and directories are relative to the specified
shareName. The tar file is written to stdout.
The -h, -n and -s options specify which dump is used to generate the tar
archive. The -r and -p options can be used to relocate the paths in the
tar archive so extracted files can be placed in a location different from
their original location.
- BackupPC_zipCreate
- BackupPC_zipCreate creates a zip file for any files or
directories in a particular backup. Merging of incrementals is done
automatically, so you don't need to worry about whether certain files
appear in the incremental or full backup.
The usage is:
BackupPC_zipCreate [options] files/directories...
Required options:
-h host host from which the zip archive is created
-n dumpNum dump number from which the zip archive is created
A negative number means relative to the end (eg -1
means the most recent dump, -2 2nd most recent etc).
-s shareName share name from which the zip archive is created
Other options:
-t print summary totals
-r pathRemove path prefix that will be replaced with pathAdd
-p pathAdd new path prefix
-c level compression level (default is 0, no compression)
-e charset charset for encoding file names (default: cp1252)
The command-line files and directories are relative to the specified
shareName. The zip file is written to stdout. The -h, -n and -s options
specify which dump is used to generate the zip archive. The -r and -p
options can be used to relocate the paths in the zip archive so extracted
files can be placed in a location different from their original
location.
Each of these programs resides in __INSTALLDIR__/bin.
Archive functions¶
BackupPC supports archiving to removable media. For users that require offsite
backups, BackupPC can create archives that stream to tape devices, or create
files of specified sizes to fit onto cd or dvd media.
Each archive type is specified by a BackupPC host with its XferMethod set to
'archive'. This allows for multiple configurations at sites where there might
be a combination of tape and cd/dvd backups being made.
BackupPC provides a menu that allows one or more hosts to be archived. The most
recent backup of each host is archived using BackupPC_tarCreate, and the
output is optionally compressed and split into fixed-sized files (eg: 650MB).
The archive for each host is done by default using
__INSTALLDIR__/bin/BackupPC_archiveHost. This script can be copied and
customized as needed.
Configuring an Archive Host¶
To create an Archive Host, add it to the hosts file just as any other host and
give it a name that best describes the type of archive, e.g. ArchiveDLT.
To tell BackupPC that the host is for archives, create a config.pl file in the
Archive Host's pc directory, adding the following line:
$Conf{XferMethod} = 'archive';
To further customise the archive's parameters you can add the changed
parameters to the host's config.pl file. The parameters are explained in the
config.pl file. Parameters may be fixed or the user can be allowed to change
them (eg: output device).
The per-host archive command is $Conf{ArchiveClientCmd}. By default this invokes
__INSTALLDIR__/bin/BackupPC_archiveHost
which you can copy and customize as necessary.
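As a sketch, an archive host's config.pl might contain something like the following; $Conf{ArchiveDest}, $Conf{ArchiveComp} and $Conf{ArchiveSplit} are parameters from the stock config.pl, while the destination path and split size below are only examples, so check config.pl for the exact meaning and units of each parameter:
$Conf{XferMethod}   = 'archive';
$Conf{ArchiveDest}  = '/var/archives';   # example destination for the archive files
$Conf{ArchiveComp}  = 'gzip';            # 'none', 'gzip' or 'bzip2'
$Conf{ArchiveSplit} = 650;               # example split size (eg: to fit CD media)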
Starting an Archive¶
In the web interface, click on the Archive Host you wish to use. You will see a
list of previous archives and a summary on each. By clicking the "Start
Archive" button you are presented with the list of hosts and the
approximate backup size (note this is raw size, not projected compressed size).
Select the hosts you wish to archive and press the "Archive Selected
Hosts" button.
The next screen allows you to adjust the parameters for this archive run. Press
the "Start the Archive" button to start archiving the selected hosts with
the parameters displayed.
Starting an Archive from the command line¶
The script BackupPC_archiveStart can be used to start an archive from the
command line (or cron etc). The usage is:
BackupPC_archiveStart archiveHost userName hosts...
This creates an archive of the most recent backup of each of the specified
hosts. The first two arguments are the archive host and the user name making
the request.
Other CGI Functions¶
Configuration and Host Editor¶
The CGI interface has a complete configuration and host editor. Only the
administrator can edit the main configuration settings and hosts. The edit
links are in the left navigation bar.
When changes are made to any parameter a "Save" button appears at the
top of the page. If you are editing a text box you will need to click outside
of the text box to make the Save button appear. If you don't select Save then
the changes won't be saved.
The host-specific configuration can be edited from the host summary page using
the link in the left navigation bar. The administrator can edit any of the
host-specific configuration settings.
When editing the host-specific configuration, each parameter has an
"override" setting that denotes the value is host-specific, meaning
that it overrides the setting in the main configuration. If you unselect
"override" then the setting is removed from the host-specific
configuration, and the value from the main configuration is displayed.
Users can edit their host-specific configuration if enabled via
$Conf{CgiUserConfigEditEnable}. The specific subset of configuration settings
that a user can edit is specified with $Conf{CgiUserConfigEdit}. It is
recommended to keep this list as short as possible (you probably don't want your
users saving dozens of backups) and it is essential that they can't edit any
of the Cmd configuration settings, otherwise they can specify an arbitrary
command that will be executed as the BackupPC user.
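As a hedged sketch, assuming the stock $Conf{CgiUserConfigEdit} hash (with keys such as EMailNotifyMinDays and BackupsDisable), allowing users to edit only a couple of harmless settings might look like this, appended after the default definition in config.pl:
$Conf{CgiUserConfigEditEnable} = 1;
$Conf{CgiUserConfigEdit}{EMailNotifyMinDays} = 1;   # users may tune reminder timing
$Conf{CgiUserConfigEdit}{BackupsDisable}     = 1;   # users may pause their own backups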
RSS¶
BackupPC supports a very basic RSS feed. Provided you have the XML::RSS perl
module installed, a URL similar to this will provide RSS information:
http://localhost/cgi-bin/BackupPC/BackupPC_Admin?action=rss
This feature is experimental. The information included will probably change.
BackupPC Design¶
Some design issues¶
- Pooling common files
- To quickly see if a file is already in the pool, an MD5
digest of the file length and contents is used as the file name in the
pool. This can't guarantee a file is identical: it just reduces the search,
usually, to a single file or a handful of files. A complete file comparison is
always done to verify whether two files are really the same.
Identical files on multiple backups are represented by hard links.
Hardlinks are used so that identical files all refer to the same physical
file on the server's disk. Also, hard links maintain reference counts so
that BackupPC knows when to delete unused files from the pool.
For the computer-science majors among you, you can think of the pooling
system used by BackupPC as just a chained hash table stored on a (big)
file system.
- The hashing function
- There is a tradeoff between how much of the file is used for
the MD5 digest and the time taken comparing all the files that have the
same hash.
Using the file length and just the first 4096 bytes of the file for the MD5
digest produces some repetitions. One example: with 900,000 unique files
in the pool, this hash gives about 7,000 repeated files, and in the worst
case 500 files have the same hash. That's not bad: we only have to do a
single file compare 99.2% of the time. But in the worst case we have to
compare as many as 500 files checking for a match.
With a modest increase in CPU time, if we use the file length and the first
256K of the file we now only have 500 repeated files and in the worst case
around 20 files have the same hash. Furthermore, if we instead use the
first and last 128K of the file (more specifically, the first and eighth
128K chunks for files larger than 1MB) we get only 300 repeated files and
in the worst case around 20 files have the same hash.
Based on this experimentation, this is the hash function used by BackupPC.
It is important that you don't change the hash function after files are
already in the pool. Otherwise your pool will grow to twice the size until
all the old backups (and all the old files with old hashes) eventually
expire.
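A simplified, hypothetical perl sketch of this kind of digest follows; the real implementation is in lib/BackupPC/Lib.pm and differs in detail (for example, exactly how the file length is mixed into the digest):
use Digest::MD5 qw(md5_hex);

# Hypothetical sketch only: the real code is in lib/BackupPC/Lib.pm.
sub pool_digest_sketch {
    my ($path) = @_;
    my $size = -s $path;
    open(my $fh, "<", $path) or die "can't open $path: $!";
    binmode($fh);
    my $data = "";
    if ($size < 256 * 1024) {
        read($fh, $data, $size);                   # whole file
    } elsif ($size <= 1024 * 1024) {
        read($fh, my $first, 128 * 1024);          # first 128K
        seek($fh, $size - 128 * 1024, 0);
        read($fh, my $last, 128 * 1024);           # last 128K
        $data = $first . $last;
    } else {
        read($fh, my $first, 128 * 1024);          # first 128K chunk
        seek($fh, 7 * 128 * 1024, 0);
        read($fh, my $eighth, 128 * 1024);         # eighth 128K chunk
        $data = $first . $eighth;
    }
    close($fh);
    return md5_hex($size . $data);                 # length plus selected contents
}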
- Compression
- BackupPC supports compression. It uses the deflate and
inflate methods in the Compress::Zlib module, which is based on the zlib
compression library (see <http://www.gzip.org/zlib/>).
The $Conf{CompressLevel} setting specifies the compression level to use.
Zero (0) means no compression. Compression levels can be from 1 (least cpu
time, slightly worse compression) to 9 (most cpu time, slightly better
compression). The recommended value is 3. Changing it to 5, for example,
will take maybe 20% more cpu time and will get another 2-3% additional
compression. Diminishing returns set in above 5. See the zlib
documentation for more information about compression levels.
BackupPC implements compression with minimal CPU load. Rather than
compressing every incoming backup file and then trying to match it against
the pool, BackupPC computes the MD5 digest based on the uncompressed file,
and matches against the candidate pool files by comparing each
uncompressed pool file against the incoming backup file. Since inflating a
file takes roughly a factor of 10 less CPU time than deflating there is a
big saving in CPU time.
The combination of pooling common files and compression can yield a factor
of 8 or more overall saving in backup storage.
BackupPC operation¶
BackupPC reads the configuration information from __CONFDIR__/config.pl. It then
runs and manages all the backup activity. It maintains queues of pending
backup requests, user backup requests and administrative commands. Based on
the configuration various requests will be executed simultaneously.
As specified by $Conf{WakeupSchedule}, BackupPC wakes up periodically to queue
backups on all the PCs. This is a four step process:
- 1.
- For each host and DHCP address backup requests are queued
on the background command queue.
- 2.
- For each PC, BackupPC_dump is forked. Several of these may
be run in parallel, based on the configuration. First a ping is done to
see if the machine is alive. If this is a DHCP address, nmblookup is run
to get the netbios name, which is used as the host name. If DNS lookup
fails, $Conf{NmbLookupFindHostCmd} is run to find the IP address from the
host name. The file __TOPDIR__/pc/$host/backups is read to decide whether
a full or incremental backup needs to be run. If no backup is scheduled,
or the ping to $host fails, then BackupPC_dump exits.
The backup is done using the specified XferMethod. Either samba's smbclient
or tar over ssh/rsh/nfs piped into BackupPC_tarExtract, or rsync over
ssh/rsh is run, or rsyncd is connected to, with the incoming data
extracted to __TOPDIR__/pc/$host/new. The XferMethod output is put into
__TOPDIR__/pc/$host/XferLOG.
The letter in the XferLOG file shows the type of object, similar to the
first letter of the modes displayed by ls -l:
d -> directory
l -> symbolic link
b -> block special file
c -> character special file
p -> pipe file (fifo)
nothing -> regular file
The words mean:
- create
- new for this backup (ie: directory or file not in
pool)
- pool
- found a match in the pool
- same
- file is identical to previous backup (contents were
checksummed and verified during full dump).
- skip
- file skipped in incremental because attributes are the same
(only displayed if $Conf{XferLogLevel} >= 2).
As BackupPC_tarExtract extracts the files from smbclient or tar, or as rsync or
ftp runs, it checks each file in the backup to see if it is identical to an
existing file from any previous backup of any PC. It does this without needing
to write the file to disk. If the file matches an existing file, a hardlink is
created to the existing file in the pool. If the file does not match any
existing files, the file is written to disk and the file name is saved in
__TOPDIR__/pc/$host/NewFileList for later processing by BackupPC_link.
BackupPC_tarExtract and rsync can handle arbitrarily large files and multiple
candidate matching files without needing to write the file to disk in the case
of a match. This significantly reduces disk writes (and also reads, since the
pool file comparison is done disk to memory, rather than disk to disk).
Based on the configuration settings, BackupPC_dump checks each old backup to see
if any should be removed. Any expired backups are moved to __TOPDIR__/trash
for later removal by BackupPC_trashClean.
- 3.
- For each complete, good, backup, BackupPC_link is run. To
avoid race conditions as new files are linked into the pool area, only a
single BackupPC_link program runs at a time and the rest are queued.
BackupPC_link reads the NewFileList written by BackupPC_dump and inspects
each new file in the backup. It re-checks if there is a matching file in
the pool (another BackupPC_link could have added the file since
BackupPC_dump checked). If so, the file is removed and replaced by a hard
link to the existing file. If the file is new, a hard link to the file is
made in the pool area, so that this file is available for checking against
each new file and new backup.
Then, if $Conf{IncrFill} is set (note that the default setting is off), for
each incremental backup, hard links are made in the new backup to all
files that were not extracted during the incremental backups. This means
the incremental backup looks like a complete image of the PC (with the
exception that files that were removed on the PC since the last full
backup will still appear in the backup directory tree).
The CGI interface knows how to merge unfilled incremental backups with the
most recent prior filled (full) backup, giving the incremental backups a
filled appearance. The default for $Conf{IncrFill} is off, since there is
no need to fill incremental backups. This saves some level of disk
activity, since lots of extra hardlinks are no longer needed (and don't
have to be deleted when the backup expires).
- 4.
- BackupPC_trashClean is always run in the background to
remove any expired backups. Every 5 minutes it wakes up and removes all
the files in __TOPDIR__/trash.
Also, once each night, BackupPC_nightly is run to complete some additional
administrative tasks, such as cleaning the pool. This involves removing
any files in the pool that only have a single hard link (meaning no
backups are using that file). Again, to avoid race conditions,
BackupPC_nightly is only run when there are no BackupPC_link processes
running. When BackupPC_nightly is run no new BackupPC_link jobs are
started. If BackupPC_nightly takes too long to run, the settings
$Conf{MaxBackupPCNightlyJobs} and $Conf{BackupPCNightlyPeriod} can be used
to run several BackupPC_nightly processes in parallel, and to split its
job over several nights.
BackupPC also listens for TCP connections on $Conf{ServerPort}, which is used by
the CGI script BackupPC_Admin for status reporting and user-initiated backup
or backup cancel requests.
Storage layout¶
BackupPC resides in several directories:
- __INSTALLDIR__
- Perl scripts comprising BackupPC reside in
__INSTALLDIR__/bin, libraries are in __INSTALLDIR__/lib and documentation
is in __INSTALLDIR__/doc.
- __CGIDIR__
- The CGI script BackupPC_Admin resides in this cgi binary
directory.
- __CONFDIR__
- All the configuration information resides below
__CONFDIR__. This directory contains:
- config.pl
- Configuration file. See Configuration file below for more
details.
- hosts
- Hosts file, which lists all the PCs to backup.
- pc
- The directory __CONFDIR__/pc contains per-client
configuration files that override settings in the main configuration file.
Each file is named __CONFDIR__/pc/HOST.pl, where HOST is the host name.
In pre-FHS versions of BackupPC these files were located in
__TOPDIR__/pc/HOST/config.pl.
- __LOGDIR__
- The directory __LOGDIR__ (__TOPDIR__/log on pre-FHS
versions of BackupPC) contains:
- LOG
- Current (today's) log file output from BackupPC.
- LOG.0 or LOG.0.z
- Yesterday's log file output. Log files are aged daily and
compressed (if compression is enabled), and old LOG files are
deleted.
- BackupPC.pid
- Contains BackupPC's process id.
- status.pl
- A summary of BackupPC's status written periodically by
BackupPC so that certain state information can be maintained if BackupPC
is restarted. Should not be edited.
- UserEmailInfo.pl
- A summary of what email was last sent to each user, and
when the last email was sent. Should not be edited.
- __TOPDIR__
- All of BackupPC's data (PC backup images, logs,
configuration information) is stored below this directory.
Below __TOPDIR__ are several directories:
- __TOPDIR__/trash
- Any directories and files below this directory are
periodically deleted whenever BackupPC_trashClean checks. When a backup is
aborted or when an old backup expires, BackupPC_dump simply moves the
directory to __TOPDIR__/trash for later removal by
BackupPC_trashClean.
- __TOPDIR__/pool
- All uncompressed files from PC backups are stored below
__TOPDIR__/pool. Each file's name is based on the MD5 hex digest of the
file contents. Specifically, for files less than 256K, the file length and
the entire file is used. For files up to 1MB, the file length and the
first and last 128K are used. Finally, for files longer than 1MB, the file
length, and the first and eighth 128K chunks of the file are used.
Each file is stored in a subdirectory X/Y/Z, where X, Y, Z are the first 3
hex digits of the MD5 digest.
For example, if a file has an MD5 digest of 123456789abcdef0, the file is
stored in __TOPDIR__/pool/1/2/3/123456789abcdef0.
The MD5 digest might not be unique (especially since not all the file's
contents are used for files bigger than 256K). Different files that have
the same MD5 digest are stored with a trailing suffix "_n" where
n is an incrementing number starting at 0. So, for example, if two
additional files were identical to the first, except the last byte was
different, and assuming the file was larger than 1MB (so the MD5 digests
are the same but the files are actually different), the three files would
be stored as:
__TOPDIR__/pool/1/2/3/123456789abcdef0
__TOPDIR__/pool/1/2/3/123456789abcdef0_0
__TOPDIR__/pool/1/2/3/123456789abcdef0_1
Both BackupPC_dump (actually, BackupPC_tarExtract) and BackupPC_link are
responsible for checking newly backed up files against the pool. For each
file, the MD5 digest is used to generate a file name in the pool
directory. If the file exists in the pool, the contents are compared. If
there is no match, additional files ending in "_n" are checked.
(Actually, BackupPC_tarExtract compares multiple candidate files in
parallel.) If the file contents exactly match, the file is created by
simply making a hard link to the pool file (this is done by
BackupPC_tarExtract as the backup proceeds). Otherwise,
BackupPC_tarExtract writes the new file to disk and a new hard link is
made in the pool to the file (this is done later by BackupPC_link).
Therefore, every file in the pool will have at least 2 hard links (one for
the pool file and one for the backup file below __TOPDIR__/pc). Identical
files from different backups or PCs will all be linked to the same file.
When old backups are deleted, some files in the pool might only have one
link. BackupPC_nightly checks the entire pool and removes all files that
have only a single link, thereby recovering the storage for that file.
One other issue: zero length files are not pooled, since there are a lot of
these files and on most file systems it doesn't save any disk space to
turn these files into hard links.
- __TOPDIR__/cpool
- All compressed files from PC backups are stored below
__TOPDIR__/cpool. Its layout is the same as __TOPDIR__/pool, and the
hashing function is the same (and, importantly, based on the uncompressed
file, not the compressed file).
- __TOPDIR__/pc/$host
- For each PC $host, all the backups for that PC are stored
below the directory __TOPDIR__/pc/$host. This directory contains the
following files:
- LOG
- Current log file for this PC from BackupPC_dump.
- LOG.DDMMYYYY or LOG.DDMMYYYY.z
- Last month's log file. Log files are aged monthly and
compressed (if compression is enabled), and old LOG files are deleted. In
earlier versions of BackupPC these files used to have a suffix of 0, 1,
....
- XferERR or XferERR.z
- Output from the transport program (ie: smbclient, tar,
rsync or ftp) for the most recent failed backup.
- new
- Subdirectory in which the current backup is stored. This
directory is renamed if the backup succeeds.
- XferLOG or XferLOG.z
- Output from the transport program (ie: smbclient, tar,
rsync or ftp) for the current backup.
- nnn (an integer)
- Successful backups are in directories numbered sequentially
starting at 0.
- XferLOG.nnn or XferLOG.nnn.z
- Output from the transport program (ie: smbclient, tar,
rsync or ftp) corresponding to backup number nnn.
- RestoreInfo.nnn
- Information about restore request #nnn including who, what,
when, and why. This file is in Data::Dumper format. (Note that the restore
numbers are not related to the backup number.)
- RestoreLOG.nnn.z
- Output from smbclient, tar or rsync during restore #nnn.
(Note that the restore numbers are not related to the backup number.)
- ArchiveInfo.nnn
- Information about archive request #nnn including who, what,
when, and why. This file is in Data::Dumper format. (Note that the archive
numbers are not related to the restore or backup number.)
- ArchiveLOG.nnn.z
- Output from archive #nnn. (Note that the archive numbers
are not related to the backup or restore number.)
- config.pl
- Old location of optional configuration settings specific to
this host. Settings in this file override the main configuration file. In
new versions of BackupPC the per-host configuration files are stored in
__CONFDIR__/pc/HOST.pl.
- backups
- A tab-delimited ascii table listing information about each
successful backup, one per row. The columns are:
- num
- The backup number, an integer that starts at 0 and
increments for each successive backup. The corresponding backup is stored
in the directory num (eg: if this field is 5, then the backup is stored in
__TOPDIR__/pc/$host/5).
- type
- Set to "full" or "incr" for full or
incremental backup.
- startTime
- Start time of the backup in unix seconds.
- endTime
- Stop time of the backup in unix seconds.
- nFiles
- Number of files backed up (as reported by smbclient, tar,
rsync or ftp).
- size
- Total file size backed up (as reported by smbclient, tar,
rsync or ftp).
- nFilesExist
- Number of files that were already in the pool (as
determined by BackupPC_dump and BackupPC_link).
- sizeExist
- Total size of files that were already in the pool (as
determined by BackupPC_dump and BackupPC_link).
- nFilesNew
- Number of files that were not in the pool (as determined by
BackupPC_link).
- sizeNew
- Total size of files that were not in the pool (as
determined by BackupPC_link).
- xferErrs
- Number of errors or warnings from smbclient, tar, rsync or
ftp.
- xferBadFile
- Number of errors from smbclient that were bad file errors
(zero otherwise).
- xferBadShare
- Number of errors from smbclient that were bad share errors
(zero otherwise).
- tarErrs
- Number of errors from BackupPC_tarExtract.
- compress
- The compression level used on this backup. Zero or empty
means no compression.
- sizeExistComp
- Total compressed size of files that were already in the
pool (as determined by BackupPC_dump and BackupPC_link).
- sizeNewComp
- Total compressed size of files that were not in the pool
(as determined by BackupPC_link).
- noFill
- Set if this backup has not been filled in with the most
recent previous filled or full backup. See $Conf{IncrFill}.
- fillFromNum
- If this backup was filled (ie: noFill is 0) then this is
the number of the backup that it was filled from.
- mangle
- Set if this backup has mangled file names and attributes.
Always true for backups in v1.4.0 and above. False for all backups prior
to v1.4.0.
- xferMethod
- Set to the value of $Conf{XferMethod} when this dump was
done.
- level
- The level of this dump. A full dump is level 0. Incremental dumps have a
level of 1 or higher; with multi-level incrementals (see $Conf{IncrLevels})
this reflects each dump's incremental level.
- restores
- A tab-delimited ascii table listing information about each
requested restore, one per row. The columns are:
- num
- Restore number (matches the suffix of the RestoreInfo.nnn
and RestoreLOG.nnn.z file), unrelated to the backup number.
- startTime
- Start time of the restore in unix seconds.
- endTime
- End time of the restore in unix seconds.
- result
- Result (ok or failed).
- errorMsg
- Error message if restore failed.
- nFiles
- Number of files restored.
- size
- Size in bytes of the restored files.
- tarCreateErrs
- Number of errors from BackupPC_tarCreate during
restore.
- xferErrs
- Number of errors from smbclient, tar, rsync or ftp during
restore.
- archives
- A tab-delimited ascii table listing information about each
requested archive, one per row. The columns are:
- num
- Archive number (matches the suffix of the ArchiveInfo.nnn
and ArchiveLOG.nnn.z file), unrelated to the backup or restore
number.
- startTime
- Start time of the archive in unix seconds.
- endTime
- End time of the archive in unix seconds.
- result
- Result (ok or failed).
- errorMsg
- Error message if archive failed.
The compressed file format is as generated by Compress::Zlib::deflate with one
minor, but important, tweak. Since Compress::Zlib::inflate fully inflates its
argument in memory, it could take large amounts of memory if it was inflating
a highly compressed file. For example, a 200MB file of 0x0 bytes compresses to
around 200K bytes. If Compress::Zlib::inflate was called with this single 200K
buffer, it would need to allocate 200MB of memory to return the result.
BackupPC watches how efficiently a file is compressing. If a big file has very
high compression (meaning it will use too much memory when it is inflated),
BackupPC calls the flush() method, which gracefully completes the current
compression. BackupPC then starts another deflate and simply appends to the
output file. So the BackupPC compressed file format is one or more
concatenated deflations/flushes. The specific rule BackupPC uses is that if a
6MB chunk compresses to less than 64K then a flush will be done.
Back to the example of the 200MB file of 0x0 bytes. Adding flushes every 6MB
adds only 200 or so bytes to the 200K output. So the storage cost of flushing
is negligible.
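Here is a minimal perl sketch of the concatenated deflate/flush pattern described above, using the Compress::Zlib object interface; BackupPC's real file I/O layer tracks compression efficiency more carefully than this fixed test:
use Compress::Zlib;

# Sketch only: illustrates flushing and restarting the deflate stream when a
# chunk compresses extremely well, so inflate never has to expand a huge buffer.
open(my $in, "<", "bigfile") or die "can't open bigfile: $!";
binmode($in);
my ($d, $status) = deflateInit(-Level => 3);
my $compressed = "";
while (read($in, my $chunk, 6 * 1024 * 1024)) {        # 6MB chunks
    my ($out, $st) = $d->deflate($chunk);
    if (length($out) < 64 * 1024) {                    # very high compression
        my ($flushOut, $fst) = $d->flush();            # complete this deflation
        $compressed .= $out . $flushOut;
        ($d, $status) = deflateInit(-Level => 3);      # start a fresh stream
    } else {
        $compressed .= $out;
    }
}
my ($final, $fstatus) = $d->flush();                   # finish the last stream
$compressed .= $final;
close($in);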
To easily decompress a BackupPC compressed file, the script BackupPC_zcat can be
found in __INSTALLDIR__/bin. For each file name argument it inflates the file
and writes it to stdout.
Rsync checksum caching¶
An incremental backup with rsync compares attributes on the client with the last
full backup. Any files with identical attributes are skipped. A full backup
with rsync sets the --ignore-times option, which causes every file to be
examined independent of attributes.
Each file is examined by generating block checksums (default 2K blocks) on the
receiving side (that's the BackupPC side), sending those checksums to the
client, where the remote rsync matches those checksums with the corresponding
file. The matching blocks and new data are sent back, allowing the client file
to be reassembled. A checksum for the entire file is sent as an extra check
that the reconstructed file is correct.
This results in significant disk IO and computation for BackupPC: every file in
a full backup, or any file with non-matching attributes in an incremental
backup, needs to be uncompressed, block checksums computed and sent. Then the
receiving side reassembles the file and has to verify the whole-file checksum.
Even if the file is identical, prior to 2.1.0, BackupPC had to read and
uncompress the file twice, once to compute the block checksums and later to
verify the whole-file checksum.
Starting in 2.1.0, BackupPC supports optional checksum caching, which means the
block and file checksums only need to be computed once for each file. This
results in a significant performance improvement. This only works for
compressed pool files. It is enabled by adding
'--checksum-seed=32761',
to $Conf{RsyncArgs} and $Conf{RsyncRestoreArgs}.
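Since config.pl is executed as perl, one way to append the option (assuming the stock array-valued settings) is:
push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';
Alternatively, simply add the string to the existing lists in config.pl.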
Rsync versions prior to and including rsync-2.6.2 need a small patch to add
support for the --checksum-seed option. This patch is available in the
cygwin-rsyncd package at <http://backuppc.sourceforge.net>. This patch
is already included in rsync CVS, so it will be standard in future versions of
rsync.
When this option is present, BackupPC will add block and file checksums to the
compressed pool file the next time a pool file is used and it doesn't already
have cached checksums. The first time a new file is written to the pool, the
checksums are not appended. The next time checksums are needed for a file,
they are computed and added. So the full performance benefit of checksum
caching won't be noticed until the third time a pool file is used (eg: the
third full backup).
With checksum caching enabled, there is a risk that, should a file's contents
in the pool be corrupted due to a disk problem while the cached checksums are
still correct, the corruption will not be detected by a full backup, since the
file contents are no longer read and compared. To reduce the chance that this
remains undetected, BackupPC can recheck cached checksums for a fraction of
the files. This fraction is set with the $Conf{RsyncCsumCacheVerifyProb}
setting. The default value of 0.01 means that 1% of the time a file's
checksums are read, the checksums are verified. This reduces performance
slightly, but, over time, ensures that file contents are in sync with the
cached checksums.
The format of the cached checksum data can be discovered by looking at the code.
Basically, the first byte of the compressed file is changed to denote that
checksums are appended. The block and file checksum data, plus some other
information and magic word, are appended to the compressed file. This allows
the cache update to be done in-place.
File name mangling¶
Backup file names are stored in "mangled" form. Each node of a path is
preceded by "f" (mnemonic: file), and special characters (\n, \r, %
and /) are URI-encoded as "%xx", where xx is the ascii character's
hex value. So c:/craig/example.txt is now stored as fc/fcraig/fexample.txt.
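A hypothetical perl sketch of mangling one path component (the real routine lives in BackupPC's library code and may differ in detail, eg: the case of the hex digits):
# Sketch only: prepend "f" and URI-encode the special characters listed above.
sub mangle_component_sketch {
    my ($name) = @_;
    $name =~ s{([%/\n\r])}{sprintf("%%%02x", ord($1))}eg;
    return "f" . $name;
}
# mangle_component_sketch("example.txt") returns "fexample.txt"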
This was done mainly so meta-data could be stored alongside the backup files
without name collisions. In particular, the attributes for the files in a
directory are stored in a file called "attrib", and mangling avoids
file name collisions (I discarded the idea of having a duplicate directory
tree for every backup just to store the attributes). Other meta-data (eg:
rsync checksums) could be stored in file names preceded by, eg, "c".
There are two other benefits to mangling: the share name might contain
"/" (eg: "/home/craig" for tar transport), and I wanted
that represented as a single level in the storage tree. Secondly, as files are
written to NewFileList for later processing by BackupPC_link, embedded
newlines in the file's path will cause problems which are avoided by mangling.
The CGI script undoes the mangling, so it is invisible to the user. Old
(unmangled) backups are still supported by the CGI interface.
Special files¶
Linux/unix file systems support several special file types: symbolic links,
character and block device files, fifos (pipes) and unix-domain sockets. All
except unix-domain sockets are supported by BackupPC (there's no point in
backing up or restoring unix-domain sockets since they only have meaning after
a process creates them). Symbolic links are stored as a plain file whose
contents are the contents of the link (not the file it points to). This file
is compressed and pooled like any normal file. Character and block device
files are also stored as plain files, whose contents are two integers
separated by a comma; the numbers are the major and minor device number. These
files are compressed and pooled like any normal file. Fifo files are stored as
empty plain files (which are not pooled since they have zero size). In all
cases, the original file type is stored in the attrib file so it can be
correctly restored.
Hardlinks are also supported. When GNU tar first encounters a file with more
than one link (ie: hardlinks) it dumps it as a regular file. When it sees the
second and subsequent hardlinks to the same file, it dumps just the hardlink
information. BackupPC correctly recognizes these hardlinks and stores them
just like symlinks: a regular text file whose contents is the path of the file
linked to. The CGI script will download the original file when you click on a
hardlink.
Also, BackupPC_tarCreate has enough magic to re-create the hardlinks dynamically
based on whether or not the original file and hardlinks are both included in
the tar file. For example, imagine a/b/x is a hardlink to a/c/y. If you use
BackupPC_tarCreate to restore directory a, then the tar file will include
a/b/x as the original file and a/c/y will be a hardlink to a/b/x. If, instead
you restore a/c, then the tar file will include a/c/y as the original file,
not a hardlink.
The unix attributes for the contents of a directory (all the files and
directories in that directory) are stored in a file called attrib. There is a
single attrib file for each directory in a backup. For example, if c:/craig
contains a single file c:/craig/example.txt, that file would be stored as
fc/fcraig/fexample.txt and there would be an attribute file in
fc/fcraig/attrib (and also fc/attrib and ./attrib). The file fc/fcraig/attrib
would contain a single entry containing the attributes for
fc/fcraig/fexample.txt.
The attrib file starts with a magic number, followed by the concatenation of the
following information for each file:
- •
- File name length in perl's pack "w" format
(variable length base 128).
- •
- File name.
- •
- The unix file type, mode, uid, gid and file size divided by
4GB and file size modulo 4GB (type mode uid gid sizeDiv4GB sizeMod4GB), in
perl's pack "w" format (variable length base 128).
- •
- The unix mtime (unix seconds) in perl's pack "N"
format (32 bit integer).
The attrib file is also compressed if compression is enabled. See the
lib/BackupPC/Attrib.pm module for full details.
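As an illustration only (the real layout, including the magic number at the start of the file, is defined in lib/BackupPC/Attrib.pm), one entry could be packed roughly like this:
# Hypothetical sketch of a single attrib entry, following the layout above.
my ($name, $type, $mode, $uid, $gid, $size, $mtime)
        = ("fexample.txt", 0, 0644, 1000, 1000, 1234, time());
my $entry = pack("w", length($name))                  # name length (base 128)
          . $name                                     # file name
          . pack("w6", $type, $mode, $uid, $gid,
                 int($size / 2**32), $size % 2**32)   # type mode uid gid sizeDiv4GB sizeMod4GB
          . pack("N", $mtime);                        # mtime as a 32-bit integer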
Attribute files are pooled just like normal backup files. This saves space if
all the files in a directory have the same attributes across multiple backups,
which is common.
Optimizations¶
BackupPC doesn't care about the access time of files in the pool since it saves
attribute meta-data separate from the files. Since BackupPC mostly does reads
from disk, maintaining the access time of files generates a lot of unnecessary
disk writes. So, provided BackupPC has a dedicated data disk, you should
consider mounting BackupPC's data directory with the noatime (or, with Linux
kernels >=2.6.20, relatime) attribute (see
mount(1)).
Limitations¶
BackupPC isn't perfect (but it is getting better). Please see
<http://backuppc.sourceforge.net/faq/limitations.html> for a discussion
of some of BackupPC's limitations.
Security issues¶
Please see <http://backuppc.sourceforge.net/faq/security.html> for a
discussion of various security issues.
Configuration File¶
The BackupPC configuration file resides in __CONFDIR__/config.pl. Optional
per-PC configuration files reside in __CONFDIR__/pc/$host.pl (or
__TOPDIR__/pc/$host/config.pl in non-FHS versions of BackupPC). This file can
be used to override settings just for a particular PC.
Modifying the main configuration file¶
The configuration file is a perl script that is executed by BackupPC, so you
should be careful to preserve the file syntax (punctuation, quotes etc) when
you edit it. It is recommended that you use CVS, RCS or some other method of
source control for changing config.pl.
BackupPC reads or re-reads the main configuration file and the hosts file in
three cases:
- •
- Upon startup.
- •
- When BackupPC is sent a HUP (-1) signal. Assuming you
installed the init.d script, you can also do this with
"/etc/init.d/backuppc reload".
- •
- When the modification time of the config.pl file changes.
BackupPC checks the modification time once during each regular
wakeup.
Whenever you change the configuration file you can either do a kill -HUP
BackupPC_pid or simply wait until the next regular wakeup period.
Each time the configuration file is re-read a message is reported in the LOG
file, so you can tail it (or view it via the CGI interface) to make sure your
kill -HUP worked. Errors in parsing the configuration file are also reported
in the LOG file.
The optional per-PC configuration file (__CONFDIR__/pc/$host.pl or
__TOPDIR__/pc/$host/config.pl in non-FHS versions of BackupPC) is read
whenever it is needed by BackupPC_dump, BackupPC_link and others.
Configuration Parameters¶
The configuration parameters are divided into five general groups. The first
group (general server configuration) provides general configuration for
BackupPC. The next two groups describe what to backup, when to do it, and how
long to keep it. The fourth group contains settings for email reminders, and
the final group contains settings for the CGI interface.
All configuration settings in the second through fifth groups can be overridden
by the per-PC config.pl file.
General server configuration¶
- $Conf{ServerHost} = '';
- Host name on which the BackupPC server is running.
- $Conf{ServerPort} = -1;
- TCP port number on which the BackupPC server listens for
and accepts connections. Normally this should be disabled (set to -1). The
TCP port is only needed if apache runs on a different machine from
BackupPC. In that case, set this to any spare port number over 1024 (eg:
2359). If you enable the TCP port, make sure you set
$Conf{ServerMesgSecret} too!
- $Conf{ServerMesgSecret} = '';
- Shared secret to make the TCP port secure. Set this to a
hard to guess string if you enable the TCP port (ie: $Conf{ServerPort}
> 0).
To avoid possible attacks via the TCP socket interface, every client message
is protected by an MD5 digest. The MD5 digest includes four items:
- a seed that is sent to the client when the connection opens
- a sequence number that increments for each message
- a shared secret that is stored in $Conf{ServerMesgSecret}
- the message itself.
The message is sent in plain text preceded by the MD5 digest. A snooper can
see the plain-text seed sent by BackupPC and plain-text message from the
client, but cannot construct a valid MD5 digest since the secret
$Conf{ServerMesgSecret} is unknown. A replay attack is not possible since
the seed changes on a per-connection and per-message basis.
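Conceptually, the digest covers those four items, along the lines of the hypothetical sketch below; the actual item order and encoding are defined by the BackupPC code, not by this sketch:
use Digest::MD5 qw(md5_hex);

# Conceptual sketch only: the real message framing lives in the BackupPC
# server and CGI code; the order and encoding of the items may differ.
my ($seed, $seqNum, $msg) = ('abc123', 7, 'status hosts');   # example values
my $digest = md5_hex($seed . $seqNum . $Conf{ServerMesgSecret} . $msg);
# The plain-text message is sent preceded by $digest; without the shared
# secret a snooper cannot forge a valid digest.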
- $Conf{MyPath} = '/bin';
- PATH setting for BackupPC. An explicit value is necessary
for taint mode. Value shouldn't matter too much since all execs use
explicit paths. However, taint mode in perl will complain if this
directory is world writable.
- $Conf{UmaskMode} = 027;
- Permission mask for directories and files created by
BackupPC. The default value prevents any access by others and prevents group
write.
- $Conf{WakeupSchedule} = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23];
- Times at which we wake up, check all the PCs, and schedule
necessary backups. Times are measured in hours since midnight. Can be
fractional if necessary (eg: 4.25 means 4:15am).
If the hosts you are backing up are always connected to the network you
might have only one or two wakeups each night. This will confine the backup
activity to after hours. On the other hand, if you are backing up laptops
that are only intermittently connected to the network you will want to
have frequent wakeups (eg: hourly) to maximize the chance that each laptop
is backed up.
Examples:
$Conf{WakeupSchedule} = [22.5]; # once per day at 10:30 pm.
$Conf{WakeupSchedule} = [2,4,6,8,10,12,14,16,18,20,22]; # every 2 hours
The default value is every hour except midnight.
The first entry of $Conf{WakeupSchedule} is when BackupPC_nightly is run.
You might want to re-arrange the entries in $Conf{WakeupSchedule} (they
don't have to be ascending) so that the first entry is when you want
BackupPC_nightly to run (eg: when you don't expect a lot of regular
backups to run).
- $Conf{MaxBackups} = 4;
- Maximum number of simultaneous backups to run. If there are
no user backup requests then this is the maximum number of simultaneous
backups.
- $Conf{MaxUserBackups} = 4;
- Additional number of simultaneous backups that users can
run. As many as $Conf{MaxBackups} + $Conf{MaxUserBackups} requests can run
at the same time.
- $Conf{MaxPendingCmds} = 15;
- Maximum number of pending link commands. New backups will
only be started if the number of pending link commands plus running jobs is
no more than $Conf{MaxPendingCmds} plus $Conf{MaxBackups}. This
limit is to make sure BackupPC doesn't fall too far behind in running
BackupPC_link commands.
- $Conf{CmdQueueNice} = 10;
- Nice level at which CmdQueue commands (eg: BackupPC_link
and BackupPC_nightly) are run.
- $Conf{MaxBackupPCNightlyJobs} = 2;
- How many BackupPC_nightly processes to run in parallel.
Each night, at the first wakeup listed in $Conf{WakeupSchedule},
BackupPC_nightly is run. Its job is to remove unneeded files in the pool,
ie: files that only have one link. To avoid race conditions,
BackupPC_nightly and BackupPC_link cannot run at the same time. Starting
in v3.0.0, BackupPC_nightly can run concurrently with backups
(BackupPC_dump).
So to reduce the elapsed time, you might want to increase this setting to
run several BackupPC_nightly processes in parallel (eg: 4, or even
8).
- $Conf{BackupPCNightlyPeriod} = 1;
- How many days (runs) it takes BackupPC_nightly to traverse
the entire pool. Normally this is 1, which means every night it runs, it
does traverse the entire pool removing unused pool files.
Other valid values are 2, 4, 8, 16. This causes BackupPC_nightly to traverse
1/2, 1/4, 1/8 or 1/16th of the pool each night, meaning it takes 2, 4, 8
or 16 days to completely traverse the pool. The advantage is that each
night the running time of BackupPC_nightly is reduced roughly in
proportion, since the total job is split over multiple days. The
disadvantage is that unused pool files take longer to get deleted, which
will slightly increase disk usage.
Note that even when $Conf{BackupPCNightlyPeriod} > 1, BackupPC_nightly
still runs every night. It just does less work each time it runs.
Examples:
$Conf{BackupPCNightlyPeriod} = 1; # entire pool is checked every night
$Conf{BackupPCNightlyPeriod} = 2; # two days to complete pool check
# (different half each night)
$Conf{BackupPCNightlyPeriod} = 4; # four days to complete pool check
# (different quarter each night)
- $Conf{MaxOldLogFiles} = 14;
- Maximum number of log files we keep around in log
directory. These files are aged nightly. A setting of 14 means the log
directory will contain about 2 weeks of old log files, in particular at
most the files LOG, LOG.0, LOG.1, ... LOG.13 (except today's LOG, these
files will have a .z extension if compression is on).
If you decrease this number after BackupPC has been running for a while you
will have to manually remove the older log files.
- $Conf{DfPath} = '';
- Full path to the df command. Security caution: normal users
should not be allowed to write to this file or directory.
- $Conf{DfCmd} = '$dfPath $topDir';
- Command to run df. The following variables are substituted
at run-time:
$dfPath path to df ($Conf{DfPath})
$topDir top-level BackupPC data directory
Note: all Cmds are executed directly without a shell, so the prog name needs
to be a full path and you can't include shell syntax like redirection and
pipes; put that in a script if you need it.
- $Conf{SplitPath} = '';
- $Conf{ParPath} = '';
- $Conf{CatPath} = '';
- $Conf{GzipPath} = '';
- $Conf{Bzip2Path} = '';
- Full path to various commands for archiving.
- $Conf{DfMaxUsagePct} = 95;
- Maximum threshold for disk utilization on the __TOPDIR__
filesystem. If the output from $Conf{DfPath} reports a percentage larger
than this number then no new regularly scheduled backups will be run.
However, user requested backups (which are usually incremental and tend to
be small) are still performed, independent of disk usage. Also, currently
running backups will not be terminated when the disk usage exceeds this
number.
- $Conf{TrashCleanSleepSec} = 300;
- How long BackupPC_trashClean sleeps in seconds between each
check of the trash directory. Once every 5 minutes should be
reasonable.
- $Conf{DHCPAddressRanges} = [];
- List of DHCP address ranges we search looking for PCs to
backup. This is an array of hashes for each class C address range. This is
only needed if hosts in the conf/hosts file have the dhcp flag set.
Examples:
# to specify 192.10.10.20 to 192.10.10.250 as the DHCP address pool
$Conf{DHCPAddressRanges} = [
{
ipAddrBase => '192.10.10',
first => 20,
last => 250,
},
];
# to specify two pools (192.10.10.20-250 and 192.10.11.10-50)
$Conf{DHCPAddressRanges} = [
{
ipAddrBase => '192.10.10',
first => 20,
last => 250,
},
{
ipAddrBase => '192.10.11',
first => 10,
last => 50,
},
];
- $Conf{BackupPCUser} = '';
- The BackupPC user.
- $Conf{TopDir} = '';
- $Conf{ConfDir} = '';
- $Conf{LogDir} = '';
- $Conf{InstallDir} = '';
- $Conf{CgiDir} = '';
- Important installation directories:
TopDir - where all the backup data is stored
ConfDir - where the main config and hosts files reside
LogDir - where log files and other transient information are stored
InstallDir - where the bin, lib and doc installation dirs reside.
Note: you cannot change this value since all the
perl scripts include this path. You must reinstall
with configure.pl to change InstallDir.
CgiDir - Apache CGI directory for BackupPC_Admin
Note: it is STRONGLY recommended that you don't change the values here.
These are set at installation time and are here for reference and are used
during upgrades.
Instead of changing TopDir here it is recommended that you use a symbolic
link to the new location, or mount the new BackupPC store at the existing
$Conf{TopDir} setting.
- $Conf{BackupPCUserVerify} = 1;
- Whether BackupPC and the CGI script BackupPC_Admin verify
that they are really running as user $Conf{BackupPCUser}. If this flag is
set and the effective user id (euid) differs from $Conf{BackupPCUser} then
both scripts exit with an error. This catches cases where BackupPC might
be accidentally started as root or the wrong user, or if the CGI script is
not installed correctly.
- $Conf{HardLinkMax} = 31999;
- Maximum number of hardlinks supported by the $TopDir file
system that BackupPC uses. Most linux or unix file systems should support
at least 32000 hardlinks per file, or 64000 in other cases. If a pool file
already has this number of hardlinks, a new pool file is created so that
new hardlinks can be accommodated. This limit will only be hit if an
identical file appears at least this number of times across all the
backups.
- $Conf{PerlModuleLoad} = undef;
- Advanced option for asking BackupPC to load additional perl
modules. Can be a list (array ref) of module names to load at
startup.
- $Conf{ServerInitdPath} = '';
- $Conf{ServerInitdStartCmd} = '';
- Path to init.d script and command to use that script to
start the server from the CGI interface. The following variables are
substituted at run-time:
$sshPath path to ssh ($Conf{SshPath})
$serverHost same as $Conf{ServerHost}
$serverInitdPath path to init.d script ($Conf{ServerInitdPath})
Example:
$Conf{ServerInitdPath} = '/etc/init.d/backuppc';
$Conf{ServerInitdStartCmd} = '$sshPath -q -x -l root $serverHost'
. ' $serverInitdPath start'
. ' < /dev/null >& /dev/null';
Note: all Cmds are executed directly without a shell, so the prog name needs
to be a full path and you can't include shell syntax like redirection and
pipes; put that in a script if you need it.
What to backup and when to do it¶
- $Conf{FullPeriod} = 6.97;
- Minimum period in days between full backups. A full dump
will only be done if at least this much time has elapsed since the last
full dump, and at least $Conf{IncrPeriod} days has elapsed since the last
successful dump.
Typically this is set slightly less than an integer number of days. The time
taken for the backup, plus the granularity of $Conf{WakeupSchedule} will
make the actual backup interval a bit longer.
- $Conf{IncrPeriod} = 0.97;
- Minimum period in days between incremental backups (a user
requested incremental backup will be done anytime on demand).
Typically this is set slightly less than an integer number of days. The time
taken for the backup, plus the granularity of $Conf{WakeupSchedule} will
make the actual backup interval a bit longer.
- $Conf{FullKeepCnt} = 1;
- Number of full backups to keep. Must be >= 1.
In the steady state, each time a full backup completes successfully the
oldest one is removed. If this number is decreased, the extra old backups
will be removed.
If filling of incremental dumps is off the oldest backup always has to be a
full (ie: filled) dump. This might mean one or two extra full dumps are
kept until the oldest incremental backups expire.
Exponential backup expiry is also supported. This allows you to specify:
- num fulls to keep at intervals of 1 * $Conf{FullPeriod}, followed by
- num fulls to keep at intervals of 2 * $Conf{FullPeriod},
- num fulls to keep at intervals of 4 * $Conf{FullPeriod},
- num fulls to keep at intervals of 8 * $Conf{FullPeriod},
- num fulls to keep at intervals of 16 * $Conf{FullPeriod},
and so on. This works by deleting every other full as each expiry boundary
is crossed.
Exponential expiry is specified using an array for $Conf{FullKeepCnt}:
$Conf{FullKeepCnt} = [4, 2, 3];
Entry #n specifies how many fulls to keep at an interval of 2^n *
$Conf{FullPeriod} (ie: 1, 2, 4, 8, 16, 32, ...).
The example above specifies keeping 4 of the most recent full backups (1
week interval), two full backups at 2 week intervals, and 3 full backups at
4 week intervals, eg:
full 0 19 weeks old \
full 1 15 weeks old >--- 3 backups at 4 * $Conf{FullPeriod}
full 2 11 weeks old /
full 3 7 weeks old \____ 2 backups at 2 * $Conf{FullPeriod}
full 4 5 weeks old /
full 5 3 weeks old \
full 6 2 weeks old \___ 4 backups at 1 * $Conf{FullPeriod}
full 7 1 week old /
full 8 current /
On a given week the spacing might be less than shown as each backup ages
through each expiry period. For example, one week later, a new full is
completed and the oldest is deleted, giving:
full 0 16 weeks old \
full 1 12 weeks old >--- 3 backups at 4 * $Conf{FullPeriod}
full 2 8 weeks old /
full 3 6 weeks old \____ 2 backups at 2 * $Conf{FullPeriod}
full 4 4 weeks old /
full 5 3 weeks old \
full 6 2 weeks old \___ 4 backups at 1 * $Conf{FullPeriod}
full 7 1 week old /
full 8 current /
You can specify 0 as a count (except in the first entry), and the array can
be as long as you wish. For example:
$Conf{FullKeepCnt} = [4, 0, 4, 0, 0, 2];
This will keep 10 full dumps, 4 most recent at 1 * $Conf{FullPeriod},
followed by 4 at an interval of 4 * $Conf{FullPeriod} (approx 1 month
apart), and then 2 at an interval of 32 * $Conf{FullPeriod} (approx 7-8
months apart).
Example: these two settings are equivalent and both keep just the four most
recent full dumps:
$Conf{FullKeepCnt} = 4;
$Conf{FullKeepCnt} = [4];
- $Conf{FullKeepCntMin} = 1;
- $Conf{FullAgeMax} = 90;
- Very old full backups are removed after $Conf{FullAgeMax}
days. However, we keep at least $Conf{FullKeepCntMin} full backups no
matter how old they are.
Note that $Conf{FullAgeMax} will be increased to $Conf{FullKeepCnt} times
$Conf{FullPeriod} if $Conf{FullKeepCnt} specifies enough full backups to
exceed $Conf{FullAgeMax}.
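For example, with $Conf{FullKeepCnt} = 20 and the default $Conf{FullPeriod}
of 6.97 days, the retained fulls span roughly 20 * 6.97 = 139 days, so the
effective $Conf{FullAgeMax} is raised to about 139 days even though the
setting itself is 90.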
- $Conf{IncrKeepCnt} = 6;
- Number of incremental backups to keep. Must be >= 1.
In the steady state, each time an incr backup completes successfully the
oldest one is removed. If this number is decreased, the extra old backups
will be removed.
- $Conf{IncrKeepCntMin} = 1;
- $Conf{IncrAgeMax} = 30;
- Very old incremental backups are removed after
$Conf{IncrAgeMax} days. However, we keep at least $Conf{IncrKeepCntMin}
incremental backups no matter how old they are.
- $Conf{IncrLevels} = [1];
- Level of each incremental. "Level" follows the
terminology of dump(1). A full backup has level 0. A new
incremental of level N will backup all files that have changed since the
most recent backup of a lower level.
The entries of $Conf{IncrLevels} apply in order to each incremental after
each full backup. It wraps around until the next full backup. For example,
these two settings have the same effect:
$Conf{IncrLevels} = [1, 2, 3];
$Conf{IncrLevels} = [1, 2, 3, 1, 2, 3];
This means the 1st and 4th incrementals (level 1) go all the way back to the
full. The 2nd and 3rd (and 5th and 6th) backups just go back to the
immediately preceding incremental.
Specifying a sequence of multi-level incrementals will usually mean more
than $Conf{IncrKeepCnt} incrementals will need to be kept, since lower
level incrementals are needed to merge a complete view of a backup. For
example, with
$Conf{FullPeriod} = 7;
$Conf{IncrPeriod} = 1;
$Conf{IncrKeepCnt} = 6;
$Conf{IncrLevels} = [1, 2, 3, 4, 5, 6];
there will be up to 11 incrementals in this case:
backup #0 (full, level 0, oldest)
backup #1 (incr, level 1)
backup #2 (incr, level 2)
backup #3 (incr, level 3)
backup #4 (incr, level 4)
backup #5 (incr, level 5)
backup #6 (incr, level 6)
backup #7 (full, level 0)
backup #8 (incr, level 1)
backup #9 (incr, level 2)
backup #10 (incr, level 3)
backup #11 (incr, level 4)
backup #12 (incr, level 5, newest)
Backup #1 (the oldest level 1 incremental) can't be deleted since backups
2..6 depend on it. Those 6 incrementals can't all be deleted since that
would only leave 5 (#8..12). When the next incremental happens (level 6),
the complete set of 6 older incrementals (#1..6) will be deleted, since
that maintains the required number ($Conf{IncrKeepCnt}) of incrementals.
This situation is reduced if you set shorter chains of multi-level
incrementals, eg:
$Conf{IncrLevels} = [1, 2, 3];
would only have up to 2 extra incrementals before all 3 are deleted.
BackupPC as usual merges the full and the sequence of incrementals together
so each incremental can be browsed and restored as though it is a complete
backup. If you specify a long chain of incrementals then more backups need
to be merged when browsing, restoring, or getting the starting point for
rsync backups. In the example above (levels 1..6), browsing backup #6
requires 7 different backups (#0..6) to be merged.
Because of this merging and the additional incrementals that need to be
kept, it is recommended that some level 1 incrementals be included in
$Conf{IncrLevels}.
Prior to version 3.0 incrementals were always level 1, meaning each
incremental backed up all the files that changed since the last full.
- $Conf{BackupsDisable} = 0;
- Disable all full and incremental backups. These settings
are useful for a client that is no longer being backed up (eg: a retired
machine), but you wish to keep the last backups available for browsing or
restoring to other machines.
There are three values for $Conf{BackupsDisable}:
0 Backups are enabled.
1 Don't do any regular backups on this client. Manually
requested backups (via the CGI interface) will still occur.
2 Don't do any backups on this client. Manually requested
backups (via the CGI interface) will be ignored.
In versions prior to 3.0, backups were disabled by setting $Conf{FullPeriod}
to -1 or -2.
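For example, a per-client setting for a retired machine whose old backups
should stay browsable while even manual backup requests are ignored:
$Conf{BackupsDisable} = 2;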
- $Conf{PartialAgeMax} = 3;
- A failed full backup is saved as a partial backup. The
rsync XferMethod can take advantage of the partial full when the next
backup is run. This parameter sets the age of the partial full in days: if
the partial backup is older than this number of days, then rsync will
ignore (not use) the partial full when the next backup is run. If you set
this to a negative value then no partials will be saved. If you set this
to 0, partials will be saved, but will not be used by the next backup.
The default setting of 3 days means that a partial older than 3 days is
ignored when the next full backup is done.
- $Conf{IncrFill} = 0;
- Whether incremental backups are filled. "Filling"
means that the most recent full (or filled) dump is merged into the new
incremental dump using hardlinks. This makes an incremental dump look like
a full dump. Prior to v1.03 all incremental backups were filled. In v1.4.0
and later the default is off.
BackupPC, and the CGI interface in particular, do the right thing with
un-filled incremental backups: each incremental is displayed merged with the
most recent filled backup, giving un-filled incrementals a filled
appearance. That means it is invisible to the user whether incremental dumps
are filled or not.
Filling backups takes a little extra disk space and costs some extra disk
activity, both for the filling itself and for the later removal. Filling is
no longer very useful, since file name mangling and compression mean a
filled backup is not directly usable anyway. It's likely the filling option
will be removed from future versions: filling will be delegated to the
display and extraction of backup data.
If filling is off, BackupPC makes sure that the oldest backup is a full,
otherwise the following incremental backups will be incomplete. This might
mean an extra full backup has to be kept until the following incremental
backups expire.
The default is off. You can turn this on or off at any time without
affecting existing backups.
- $Conf{RestoreInfoKeepCnt} = 10;
- Number of restore logs to keep. BackupPC remembers
information about each restore request. This number per client will be
kept around before the oldest ones are pruned.
Note: files/dirs delivered via Zip or Tar downloads don't count as restores.
Only the first restore option (where the files and dirs are written to the
host) count as restores that are logged.
- $Conf{ArchiveInfoKeepCnt} = 10;
- Number of archive logs to keep. BackupPC remembers
information about each archive request. This number per archive client
will be kept around before the oldest ones are pruned.
- $Conf{BackupFilesOnly} = undef;
- List of directories or files to backup. If this is defined,
only these directories or files will be backed up.
For Smb, only one of $Conf{BackupFilesExclude} and $Conf{BackupFilesOnly}
can be specified per share. If both are set for a particular share, then
$Conf{BackupFilesOnly} takes precedence and $Conf{BackupFilesExclude} is
ignored.
This can be set to a string, an array of strings, or, in the case of
multiple shares, a hash of strings or arrays. A hash is used to give a
list of directories or files to backup for each share (the share name is
the key). If this is set to just a string or array, and
$Conf{SmbShareName} contains multiple share names, then the setting is
assumed to apply to all shares.
If a hash is used, a special key "*" means it applies to all
shares that don't have a specific entry.
Examples:
$Conf{BackupFilesOnly} = '/myFiles';
$Conf{BackupFilesOnly} = ['/myFiles']; # same as first example
$Conf{BackupFilesOnly} = ['/myFiles', '/important'];
$Conf{BackupFilesOnly} = {
'c' => ['/myFiles', '/important'], # these are for 'c' share
'd' => ['/moreFiles', '/archive'], # these are for 'd' share
};
$Conf{BackupFilesOnly} = {
'c' => ['/myFiles', '/important'], # these are for 'c' share
'*' => ['/myFiles', '/important'], # these are other shares
};
- $Conf{BackupFilesExclude} = undef;
- List of directories or files to exclude from the backup.
For Smb, only one of $Conf{BackupFilesExclude} and $Conf{BackupFilesOnly}
can be specified per share. If both are set for a particular share, then
$Conf{BackupFilesOnly} takes precedence and $Conf{BackupFilesExclude} is
ignored.
This can be set to a string, an array of strings, or, in the case of
multiple shares, a hash of strings or arrays. A hash is used to give a
list of directories or files to exclude for each share (the share name is
the key). If this is set to just a string or array, and
$Conf{SmbShareName} contains multiple share names, then the setting is
assumed to apply to all shares.
The exact behavior is determined by the underlying transport program,
smbclient or tar. For smbclient the exclude file list is passed into the
X option. Simple shell wild-cards using "*" or "?" are
allowed.
For tar, if the exclude file contains a "/" it is assumed to be
anchored at the start of the string. Since all the tar paths start with
"./", BackupPC prepends a "." if the exclude file
starts with a "/". Note that GNU tar version >= 1.13.7 is
required for the exclude option to work correctly. For linux or unix
machines you should add "/proc" to $Conf{BackupFilesExclude}
unless you have specified --one-file-system in $Conf{TarClientCmd} or
--one-file-system in $Conf{RsyncArgs}. Also, for tar, do not use a
trailing "/" in the directory name: a trailing "/"
causes the name to not match and the directory will not be excluded.
Users report that for smbclient you should specify a directory followed by
"/*", eg: "/proc/*", instead of just
"/proc".
FTP servers are traversed recursively, so excluding a directory also
excludes its contents. You can use the wildcard characters "*"
and "?" to define files for inclusion and exclusion. Both
attributes $Conf{BackupFilesOnly} and $Conf{BackupFilesExclude} can be
defined for the same share.
If a hash is used, a special key "*" means it applies to all
shares that don't have a specific entry.
Examples:
$Conf{BackupFilesExclude} = '/temp';
$Conf{BackupFilesExclude} = ['/temp']; # same as first example
$Conf{BackupFilesExclude} = ['/temp', '/winnt/tmp'];
$Conf{BackupFilesExclude} = {
'c' => ['/temp', '/winnt/tmp'], # these are for 'c' share
'd' => ['/junk', '/dont_back_this_up'], # these are for 'd' share
};
$Conf{BackupFilesExclude} = {
'c' => ['/temp', '/winnt/tmp'], # these are for 'c' share
'*' => ['/junk', '/dont_back_this_up'], # these are for other shares
};
- $Conf{BlackoutBadPingLimit} = 3;
- $Conf{BlackoutGoodCnt} = 7;
- PCs that are always or often on the network can be backed
up after hours, to reduce PC, network and server load during working
hours. For each PC a count of consecutive good pings is maintained. Once a
PC has at least $Conf{BlackoutGoodCnt} consecutive good pings it is
subject to "blackout" and not backed up during hours and days
specified by $Conf{BlackoutPeriods}.
To allow for periodic rebooting of a PC or other brief periods when a PC is
not on the network, a number of consecutive bad pings is allowed before
the good ping count is reset. This parameter is
$Conf{BlackoutBadPingLimit}.
Note that bad and good pings don't occur with the same interval. If a
machine is always on the network, it will only be pinged roughly once
every $Conf{IncrPeriod} (eg: once per day). So a setting for
$Conf{BlackoutGoodCnt} of 7 means it will take around 7 days for a machine
to be subject to blackout. On the other hand, if a ping fails, it will
be retried roughly every time BackupPC wakes up, eg, every one or two
hours. So a setting for $Conf{BlackoutBadPingLimit} of 3 means that the PC
will lose its blackout status after 3-6 hours of unavailability.
To disable the blackout feature set $Conf{BlackoutGoodCnt} to a negative
value. A value of 0 will make all machines subject to blackout. But if you
don't want to do any backups during the day it would be easier to just set
$Conf{WakeupSchedule} to a restricted schedule.
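For example, a sketch of a more aggressive blackout policy (values chosen
for illustration): a host becomes subject to blackout after about 3 days of
good pings, and keeps that status through roughly 5 failed wakeup-time
pings:
$Conf{BlackoutGoodCnt} = 3;
$Conf{BlackoutBadPingLimit} = 5;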
- $Conf{BlackoutPeriods} = [ ... ];
- One or more blackout periods can be specified. If a client
is subject to blackout then no regular (non-manual) backups will be
started during any of these periods. hourBegin and hourEnd specify hours
from midnight, and weekDays is a list of days of the week where 0 is Sunday,
1 is Monday, etc.
For example:
$Conf{BlackoutPeriods} = [
{
hourBegin => 7.0,
hourEnd => 19.5,
weekDays => [1, 2, 3, 4, 5],
},
];
specifies one blackout period from 7:00am to 7:30pm local time on Mon-Fri.
The blackout period can also span midnight by setting hourBegin >
hourEnd, eg:
$Conf{BlackoutPeriods} = [
{
hourBegin => 7.0,
hourEnd => 19.5,
weekDays => [1, 2, 3, 4, 5],
},
{
hourBegin => 23,
hourEnd => 5,
weekDays => [5, 6],
},
];
This specifies one blackout period from 7:00am to 7:30pm local time on
Mon-Fri, and a second period from 11pm to 5am on Friday and Saturday
night.
- $Conf{BackupZeroFilesIsFatal} = 1;
- A backup of a share that has zero files is considered
fatal. This is used to catch miscellaneous Xfer errors that result in no
files being backed up. If you have shares that might be empty (and
therefore an empty backup is valid) you should set this flag to 0.
How to backup a client¶
- $Conf{XferMethod} = 'smb';
- What transport method to use to backup each host. If you
have a mixed set of WinXX and linux/unix hosts you will need to override
this in the per-PC config.pl.
The valid values are:
- 'smb': backup and restore via smbclient and the SMB protocol.
Easiest choice for WinXX.
- 'rsync': backup and restore via rsync (via rsh or ssh).
Best choice for linux/unix. Good choice also for WinXX.
- 'rsyncd': backup and restore via rsync daemon on the client.
Best choice for linux/unix if you have rsyncd running on
the client. Good choice also for WinXX.
- 'tar': backup and restore via tar, tar over ssh, rsh or nfs.
Good choice for linux/unix.
- 'archive': host is a special archive host. Backups are not done.
An archive host is used to archive other host's backups
to permanent media, such as tape, CDR or DVD.
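For example, on a mostly-Windows site you might keep the global default of
'smb' and override the method for a linux client in its per-client config
file (the host name here is illustrative):
# in the per-client config file for host 'fileserver'
$Conf{XferMethod} = 'rsync';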
- $Conf{XferLogLevel} = 1;
- Level of verbosity in Xfer log files. 0 means be quiet, 1
will give one line per file, 2 will also show skipped files on
incrementals, higher values give more output.
- $Conf{ClientCharset} = '';
- Filename charset encoding on the client. BackupPC uses utf8
on the server for filename encoding. If this is empty, then utf8 is
assumed and client filenames will not be modified. If set to a different
encoding then filenames will be converted to/from utf8 automatically during
backup and restore.
If the file names displayed in the browser (eg: accents or special
characters) don't look right then it is likely you haven't set
$Conf{ClientCharset} correctly.
If you are using smbclient on a WinXX machine, smbclient will convert to the
"unix charset" setting in smb.conf. The default is utf8, in
which case leave $Conf{ClientCharset} empty since smbclient does the right
conversion.
If you are using rsync on a WinXX machine then it does no conversion. A
typical WinXX encoding for latin1/western europe is 'cp1252', so in this
case set $Conf{ClientCharset} to 'cp1252'.
On a linux or unix client, run "locale charmap" to see the
client's charset. Set $Conf{ClientCharset} to this value. A typical value
for english/US is 'ISO-8859-1'.
Do "perldoc Encode::Supported" to see the list of possible charset
values. The FAQ at http://www.cl.cam.ac.uk/~mgk25/unicode.html is
excellent, and http://czyborra.com/charsets/iso8859.html provides more
information on the iso-8859 charsets.
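For example (values taken from the typical cases described above):
$Conf{ClientCharset} = 'cp1252';      # rsync on a WinXX client
$Conf{ClientCharset} = 'ISO-8859-1';  # linux/unix client whose "locale charmap" reports ISO-8859-1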
- $Conf{ClientCharsetLegacy} = 'iso-8859-1';
- Prior to 3.x no charset conversion was done by BackupPC.
Backups were stored in whatever charset the XferMethod provided -
typically utf8 for smbclient and the client's locale settings for rsync
and tar (eg: cp1252 for rsync on WinXX and perhaps iso-8859-1 with rsync
on linux). This setting tells BackupPC the charset that was used to store
file names in old backups taken with BackupPC 2.x, so that non-ascii file
names in old backups can be viewed and restored.
Samba Configuration¶
- $Conf{SmbShareName} = 'C$';
- Name of the host share that is backed up when using SMB.
This can be a string or an array of strings if there are multiple shares
per host. Examples:
$Conf{SmbShareName} = 'c'; # backup 'c' share
$Conf{SmbShareName} = ['c', 'd']; # backup 'c' and 'd' shares
This setting only matters if $Conf{XferMethod} = 'smb'.
- $Conf{SmbShareUserName} = '';
- Smbclient share user name. This is passed to smbclient's -U
argument.
This setting only matters if $Conf{XferMethod} = 'smb'.
- $Conf{SmbSharePasswd} = '';
- Smbclient share password. This is passed to smbclient via
its PASSWD environment variable. There are several ways you can tell
BackupPC the smb share password. In each case you should be very careful
about security. If you put the password here, make sure that this file is
not readable by regular users! See the "Setting up config.pl"
section in the documentation for more information.
This setting only matters if $Conf{XferMethod} = 'smb'.
- $Conf{SmbClientPath} = '';
- Full path for smbclient. Security caution: normal users
should not be allowed to write to this file or directory.
smbclient is from the Samba distribution. smbclient is used to actually
extract the incremental or full dump of the share filesystem from the PC.
This setting only matters if $Conf{XferMethod} = 'smb'.
- $Conf{SmbClientFullCmd} = '$smbClientPath
\\\\$host\\$shareName' ...
- Command to run smbclient for a full dump. This setting only
matters if $Conf{XferMethod} = 'smb'.
The following variables are substituted at run-time:
$smbClientPath same as $Conf{SmbClientPath}
$host host to backup/restore
$hostIP host IP address
$shareName share name
$userName user name
$fileList list of files to backup (based on exclude/include)
$I_option optional -I option to smbclient
$X_option exclude option (if $fileList is an exclude list)
$timeStampFile start time for incremental dump
Note: all Cmds are executed directly without a shell, so the prog name needs
to be a full path and you can't include shell syntax like redirection and
pipes; put that in a script if you need it.
- $Conf{SmbClientIncrCmd} = '$smbClientPath
\\\\$host\\$shareName' ...
- Command to run smbclient for an incremental dump. This
setting only matters if $Conf{XferMethod} = 'smb'.
Same variable substitutions are applied as $Conf{SmbClientFullCmd}.
Note: all Cmds are executed directly without a shell, so the prog name needs
to be a full path and you can't include shell syntax like redirection and
pipes; put that in a script if you need it.
- $Conf{SmbClientRestoreCmd} = '$smbClientPath
\\\\$host\\$shareName' ...
- Command to run smbclient for a restore. This setting only
matters if $Conf{XferMethod} = 'smb'.
Same variable substitutions are applied as $Conf{SmbClientFullCmd}.
If your smb share is read-only then direct restores will fail. You should
set $Conf{SmbClientRestoreCmd} to undef and the corresponding CGI restore
option will be removed.
Note: all Cmds are executed directly without a shell, so the prog name needs
to be a full path and you can't include shell syntax like redirection and
pipes; put that in a script if you need it.
Tar Configuration¶
- $Conf{TarShareName} = '/';
- Which host directories to backup when using tar transport.
This can be a string or an array of strings if there are multiple
directories to backup per host. Examples:
$Conf{TarShareName} = '/'; # backup everything
$Conf{TarShareName} = '/home'; # only backup /home
$Conf{TarShareName} = ['/home', '/src']; # backup /home and /src
The fact this parameter is called 'TarShareName' is for historical
consistency with the Smb transport options. You can use any valid
directory on the client: there is no need for it to correspond to any Smb
share or device mount point.
Note that you can also use $Conf{BackupFilesOnly} to specify a specific
list of directories to backup. It's more efficient to use this option
instead of $Conf{TarShareName} since a new tar is run for each entry in
$Conf{TarShareName}.
On the other hand, if you add --one-file-system to $Conf{TarClientCmd} you
can backup each file system separately, which makes restoring one bad file
system easier. In this case you would list all of the mount points here,
since you can't get the same result with $Conf{BackupFilesOnly}:
$Conf{TarShareName} = ['/', '/var', '/data', '/boot'];
This setting only matters if $Conf{XferMethod} = 'tar'.
- $Conf{TarClientCmd} = '$sshPath -q -x -n -l root $host'
...
- Full command to run tar on the client. GNU tar is required.
You will need to fill in the correct paths for ssh2 on the local host
(server) and GNU tar on the client. Security caution: normal users should
not be allowed to write to these executable files or directories.
See the documentation for more information about setting up ssh2 keys.
If you plan to use NFS then tar just runs locally and ssh2 is not needed.
For example, assuming the client filesystem is mounted below
/mnt/hostName, you could use something like:
$Conf{TarClientCmd} = '$tarPath -c -v -f - -C /mnt/$host/$shareName'
. ' --totals';
In the case of NFS or rsh you need to make sure BackupPC's privileges are
sufficient to read all the files you want to backup. Also, you will
probably want to add "/proc" to $Conf{BackupFilesExclude}.
The following variables are substituted at run-time:
$host host name
$hostIP host's IP address
$incrDate newer-than date for incremental backups
$shareName share name to backup (ie: top-level directory path)
$fileList specific files to backup or exclude
$tarPath same as $Conf{TarClientPath}
$sshPath same as $Conf{SshPath}
If a variable is followed by a "+" it is shell escaped. This is
necessary for the command part of ssh or rsh, since it ends up getting
passed through the shell.
This setting only matters if $Conf{XferMethod} = 'tar'.
Note: all Cmds are executed directly without a shell, so the prog name needs
to be a full path and you can't include shell syntax like redirection and
pipes; put that in a script if you need it.
- $Conf{TarFullArgs} = '$fileList+';
- Extra tar arguments for full backups. Several variables are
substituted at run-time. See $Conf{TarClientCmd} for the list of variable
substitutions.
If you are running tar locally (ie: without rsh or ssh) then remove the
"+" so that the argument is no longer shell escaped.
This setting only matters if $Conf{XferMethod} = 'tar'.
- $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
- Extra tar arguments for incr backups. Several variables are
substituted at run-time. See $Conf{TarClientCmd} for the list of variable
substitutions.
Note that GNU tar has several methods for specifying incremental backups,
including:
--newer-mtime $incrDate+
This causes a file to be included if the modification time is
later than $incrDate (meaning its contents might have changed).
But changes in the ownership or modes will not qualify the
file to be included in an incremental.
--newer=$incrDate+
This causes the file to be included if any attribute of the
file is later than $incrDate, meaning either attributes or
the modification time. This is the default method. Do
not use --atime-preserve in $Conf{TarClientCmd} above,
otherwise resetting the atime (access time) counts as an
attribute change, meaning the file will always be included
in each new incremental dump.
If you are running tar locally (ie: without rsh or ssh) then remove the
"+" so that the argument is no longer shell escaped.
This setting only matters if $Conf{XferMethod} = 'tar'.
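For example, a sketch of the local (no rsh/ssh) form described above, with
the "+" shell-escape suffix removed:
$Conf{TarFullArgs} = '$fileList';
$Conf{TarIncrArgs} = '--newer=$incrDate $fileList';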
- $Conf{TarClientRestoreCmd} = '$sshPath -q -x -l root $host'
...
- Full command to run tar for restore on the client. GNU tar
is required. This can be the same as $Conf{TarClientCmd}, with tar's -c
replaced by -x and ssh's -n removed.
See $Conf{TarClientCmd} for full details.
This setting only matters if $Conf{XferMethod} = "tar".
If you want to disable direct restores using tar, you should set
$Conf{TarClientRestoreCmd} to undef and the corresponding CGI restore
option will be removed.
Note: all Cmds are executed directly without a shell, so the prog name needs
to be a full path and you can't include shell syntax like redirection and
pipes; put that in a script if you need it.
- $Conf{TarClientPath} = '';
- Full path for tar on the client. Security caution: normal
users should not be allowed to write to this file or directory.
This setting only matters if $Conf{XferMethod} = 'tar'.
Rsync/Rsyncd Configuration¶
- $Conf{RsyncClientPath} = '';
- Path to rsync executable on the client
- $Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host
$rsyncPath $argList+';
- Full command to run rsync on the client machine. The
following variables are substituted at run-time:
$host host name being backed up
$hostIP host's IP address
$shareName share name to backup (ie: top-level directory path)
$rsyncPath same as $Conf{RsyncClientPath}
$sshPath same as $Conf{SshPath}
$argList argument list, built from $Conf{RsyncArgs},
$shareName, $Conf{BackupFilesExclude} and
$Conf{BackupFilesOnly}
This setting only matters if $Conf{XferMethod} = 'rsync'.
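One site-specific alternative is to log in as an unprivileged user and run
rsync through sudo; this is a sketch, not the standard setting, and it
assumes sudo on the client is configured to let that user run rsync as root:
$Conf{RsyncClientCmd} = '$sshPath -q -x -l backuppc $host'
. ' /usr/bin/sudo $rsyncPath $argList+';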
- $Conf{RsyncClientRestoreCmd} = '$sshPath -q -x -l root
$host $rsyncPath $argList+';
- Full command to run rsync for restore on the client. The
following variables are substituted at run-time:
$host host name being backed up
$hostIP host's IP address
$shareName share name to backup (ie: top-level directory path)
$rsyncPath same as $Conf{RsyncClientPath}
$sshPath same as $Conf{SshPath}
$argList argument list, built from $Conf{RsyncArgs},
$shareName, $Conf{BackupFilesExclude} and
$Conf{BackupFilesOnly}
This setting only matters if $Conf{XferMethod} = 'rsync'.
Note: all Cmds are executed directly without a shell, so the prog name needs
to be a full path and you can't include shell syntax like redirection and
pipes; put that in a script if you need it.
- $Conf{RsyncShareName} = '/';
- Share name to backup. For $Conf{XferMethod} =
"rsync" this should be a file system path, eg '/' or '/home'.
For $Conf{XferMethod} = "rsyncd" this should be the name of the
module to backup (ie: the name from /etc/rsyncd.conf).
This can also be a list of multiple file system paths or modules. For
example, by adding --one-file-system to $Conf{RsyncArgs} you can backup
each file system separately, which makes restoring one bad file system
easier. In this case you would list all of the mount points:
$Conf{RsyncShareName} = ['/', '/var', '/data', '/boot'];
- $Conf{RsyncdClientPort} = 873;
- Rsync daemon port on the client, for $Conf{XferMethod} =
"rsyncd".
- $Conf{RsyncdUserName} = '';
- Rsync daemon user name on client, for $Conf{XferMethod} =
"rsyncd". The user name and password are stored on the client in
whatever file the "secrets file" parameter in rsyncd.conf points
to (eg: /etc/rsyncd.secrets).
- $Conf{RsyncdPasswd} = '';
- Rsync daemon password on the client, for $Conf{XferMethod} =
"rsyncd". The user name and password are stored on the client in
whatever file the "secrets file" parameter in rsyncd.conf points
to (eg: /etc/rsyncd.secrets).
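For example, a sketch with placeholder credentials; the same pair must
appear in the client's secrets file (eg: a line "backup:mySecret" in
/etc/rsyncd.secrets):
$Conf{RsyncdUserName} = 'backup';
$Conf{RsyncdPasswd} = 'mySecret';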
- $Conf{RsyncdAuthRequired} = 1;
- Whether authentication is mandatory when connecting to the
client's rsyncd. By default this is on, ensuring that BackupPC will refuse
to connect to an rsyncd on the client that is not password protected. Turn
off at your own risk.
- $Conf{RsyncCsumCacheVerifyProb} = 0.01;
- When rsync checksum caching is enabled (by adding the
--checksum-seed=32761 option to $Conf{RsyncArgs}), the cached checksums
can be occasionally verified to make sure the file contents matches the
cached checksums. This is to avoid the risk that disk problems might cause
the pool file contents to get corrupted, but the cached checksums would
make BackupPC think that the file still matches the client.
This setting is the probability (0 means never and 1 means always) that a
file will be rechecked. Setting it to 0 means the checksums will not be
rechecked (unless there is a phase 0 failure). Setting it to 1 (ie: 100%)
means all files will be checked, but that is not a desirable setting since
you are better off simply turning caching off (ie: remove the
--checksum-seed option).
The default of 0.01 means 1% (on average) of the files during a full backup
will have their cached checksum re-checked.
This setting has no effect unless checksum caching is turned on.
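For example, a sketch of enabling checksum caching; one convenient place to
add the option is $Conf{RsyncArgsExtra} (described below), rather than
retyping the full $Conf{RsyncArgs} list:
$Conf{RsyncArgsExtra} = ['--checksum-seed=32761'];
$Conf{RsyncCsumCacheVerifyProb} = 0.01;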
- $Conf{RsyncArgs} = [ ... ];
- Arguments to rsync for backup. Do not edit the first set
unless you have a thorough understanding of how File::RsyncP works.
- $Conf{RsyncArgsExtra} = [];
- Additional arguments added to RsyncArgs. This can be used
in combination with $Conf{RsyncArgs} to allow customization of the rsync
arguments on a per-client basis. The standard arguments go in
$Conf{RsyncArgs} and $Conf{RsyncArgsExtra} can be set on a per-client
basis.
Examples of additional arguments that should work are --exclude/--include,
eg:
$Conf{RsyncArgsExtra} = [
'--exclude', '/proc',
'--exclude', '*.tmp',
];
Both $Conf{RsyncArgs} and $Conf{RsyncArgsExtra} are subject to the following
variable substitutions:
$client client name being backed up
$host host name (could be different from client name if
$Conf{ClientNameAlias} is set)
$hostIP IP address of host
$confDir configuration directory path
This allows settings of the form:
$Conf{RsyncArgsExtra} = [
'--exclude-from=$confDir/pc/$host.exclude',
];
- $Conf{RsyncRestoreArgs} = [ ... ];
- Arguments to rsync for restore. Do not edit the first set
unless you have a thorough understanding of how File::RsyncP works.
If you want to disable direct restores using rsync (eg: if the module is
read-only), you should set $Conf{RsyncRestoreArgs} to undef and the
corresponding CGI restore option will be removed.
$Conf{RsyncRestoreArgs} is subject to the following variable substitutions:
$client client name being backed up
$host host name (could be different from client name if
$Conf{ClientNameAlias} is set)
$hostIP IP address of host
$confDir configuration directory path
Note: $Conf{RsyncArgsExtra} doesn't apply to $Conf{RsyncRestoreArgs}.
FTP Configuration¶
- $Conf{FtpShareName} = '';
- Which host directories to backup when using FTP. This can
be a string or an array of strings if there are multiple shares per host.
This value must be specified in one of two ways: either as a subdirectory of
the 'share root' on the server, or as the absolute path of the directory.
In the following example, if the directory /home/username is the root share
of the ftp server with the given username, the following two values will
back up the same directory:
$Conf{FtpShareName} = 'www'; # www directory
$Conf{FtpShareName} = '/home/username/www'; # same directory
Path resolution is not supported; i.e., you may not have an ftp share path
defined as '../otheruser' or '~/games'.
Multiple shares may also be specified, as with other protocols:
$Conf{FtpShareName} = [ 'www',
'bin',
'config' ];
Note that you can also use $Conf{BackupFilesOnly} to specify a specific
list of directories to backup. It's more efficient to use this option
instead of $Conf{FtpShareName} since a separate transfer is run for each
entry in $Conf{FtpShareName}.
This setting only matters if $Conf{XferMethod} = 'ftp'.
- $Conf{FtpUserName} = '';
- FTP user name. This is used to log into the server.
This setting is used only if $Conf{XferMethod} = 'ftp'.
- $Conf{FtpPasswd} = '';
- FTP user password. This is used to log into the server.
This setting is used only if $Conf{XferMethod} = 'ftp'.
- $Conf{FtpPassive} = 1;
- Whether passive mode is used. The correct setting depends
upon whether local or remote ports are accessible from the other machine,
which is affected by any firewall or routers between the FTP server on the
client and the BackupPC server.
This setting is used only if $Conf{XferMethod} = 'ftp'.
- $Conf{FtpBlockSize} = 10240;
- Transfer block size. This sets the amount of data sent in
each frame. If undefined, the default value is used.
This setting is used only if $Conf{XferMethod} = 'ftp'.
- $Conf{FtpPort} = 21;
- The port of the ftp server. If undefined, 21 is used.
This setting is used only if $Conf{XferMethod} = 'ftp'.
- $Conf{FtpTimeout} = 120;
- Connection timeout for FTP. When undefined, the default is
120 seconds.
This setting is used only if $Conf{XferMethod} = 'ftp'.
- $Conf{FtpFollowSymlinks} = 0;
- Behaviour when BackupPC encounters symlinks on the FTP
share.
Symlinks cannot be restored via FTP, so the desired behaviour will be
different depending on the setup of the share. The default, shown above, is
0 (symlinks are not followed). Shares with more complicated directory
structures should consider other protocols.
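For example, a sketch of a per-client FTP setup (share name and credentials
are placeholders):
$Conf{XferMethod} = 'ftp';
$Conf{FtpShareName} = 'www';
$Conf{FtpUserName} = 'backupuser';
$Conf{FtpPasswd} = 'secret';
$Conf{FtpPassive} = 1;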
Archive Configuration¶
- $Conf{ArchiveDest} = '/tmp';
- Archive Destination
The destination of the archive, e.g. /tmp for a file archive or /dev/nst0
for a device archive.
- $Conf{ArchiveComp} = 'gzip';
- Archive Compression type
The valid values are:
- 'none': No Compression
- 'gzip': Medium Compression. Recommended.
- 'bzip2': High Compression but takes longer.
- $Conf{ArchivePar} = 0;
- Archive Parity Files
The amount of Parity data to generate, as a percentage of the archive size.
Uses the commandline par2 (par2cmdline) available from
http://parchive.sourceforge.net
Only useful for file dumps.
Set to 0 to disable this feature.
- $Conf{ArchiveSplit} = 0;
- Archive Size Split
Only for file archives. Splits the output into the specified size *
1,000,000. e.g. to split into 650,000,000 bytes, specify 650 below.
If the value is 0, or if $Conf{ArchiveDest} is an existing file or device
(e.g. a streaming tape drive), this feature is disabled.
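For example, a sketch of a file-based archive configuration (the path and
sizes are illustrative): write bzip2-compressed archives to /var/archives,
split into 650,000,000 byte pieces, with 5% par2 parity data:
$Conf{ArchiveDest} = '/var/archives';
$Conf{ArchiveComp} = 'bzip2';
$Conf{ArchiveSplit} = 650;
$Conf{ArchivePar} = 5;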
- $Conf{ArchiveClientCmd} =
'$Installdir/bin/BackupPC_archiveHost' ...
- Archive Command
This is the command that is called to actually run the archive process for
each host. The following variables are substituted at run-time:
$Installdir The installation directory of BackupPC
$tarCreatePath The path to BackupPC_tarCreate
$splitpath The path to the split program
$parpath The path to the par2 program
$host The host to archive
$backupnumber The backup number of the host to archive
$compression The path to the compression program
$compext The extension assigned to the compression type
$splitsize The number of bytes to split archives into
$archiveloc The location to put the archive
$parfile The amount of parity data to create (percentage)
Note: all Cmds are executed directly without a shell, so the prog name needs
to be a full path and you can't include shell syntax like redirection and
pipes; put that in a script if you need it.
- $Conf{SshPath} = '';
- Full path for ssh. Security caution: normal users should
not be allowed to write to this file or directory.
- $Conf{NmbLookupPath} = '';
- Full path for nmblookup. Security caution: normal users
should not be allowed to write to this file or directory.
nmblookup is from the Samba distribution. nmblookup is used to get the
netbios name, necessary for DHCP hosts.
- $Conf{NmbLookupCmd} = '$nmbLookupPath -A $host';
- NmbLookup command. Given an IP address, does an nmblookup
on that IP address. The following variables are substituted at run-time:
$nmbLookupPath path to nmblookup ($Conf{NmbLookupPath})
$host IP address
This command is only used for DHCP hosts: given an IP address, this command
should try to find its NetBios name.
Note: all Cmds are executed directly without a shell, so the prog name needs
to be a full path and you can't include shell syntax like redirection and
pipes; put that in a script if you need it.
- $Conf{NmbLookupFindHostCmd} = '$nmbLookupPath $host';
- NmbLookup command. Given a netbios name, finds that host by
doing a NetBios lookup. Several variables are substituted at run-time:
$nmbLookupPath path to nmblookup ($Conf{NmbLookupPath})
$host NetBios name
In some cases you might need to change the broadcast address, for example if
nmblookup uses 192.168.255.255 by default and you find that doesn't work,
try 192.168.1.255 (or your equivalent class C address) using the -B
option:
$Conf{NmbLookupFindHostCmd} = '$nmbLookupPath -B 192.168.1.255 $host';
If you use a WINS server and your machines don't respond to multicast
NetBios requests you can use this (replace 1.2.3.4 with the IP address of
your WINS server):
$Conf{NmbLookupFindHostCmd} = '$nmbLookupPath -R -U 1.2.3.4 $host';
This is preferred over multicast since it minimizes network traffic.
Experiment manually for your site to see what form of nmblookup command
works.
Note: all Cmds are executed directly without a shell, so the prog name needs
to be a full path and you can't include shell syntax like redirection and
pipes; put that in a script if you need it.
- $Conf{FixedIPNetBiosNameCheck} = 0;
- For fixed IP address hosts, BackupPC_dump can also verify
the netbios name to ensure it matches the host name. An error is generated
if they do not match. Typically this flag is off. But if you are going to
transition a bunch of machines from fixed host addresses to DHCP, setting
this flag is a great way to verify that the machines have their netbios
name set correctly before turning on DHCP.
- $Conf{PingPath} = '';
- Full path to the ping command. Security caution: normal
users should not be allowed to write to this file or directory.
If you want to disable ping checking, set this to some program that exits
with 0 status, eg:
$Conf{PingPath} = '/bin/echo';
- $Conf{PingCmd} = '$pingPath -c 1 $host';
- Ping command. The following variables are substituted at
run-time:
$pingPath path to ping ($Conf{PingPath})
$host host name
Wade Brown reports that on solaris 2.6 and 2.7 ping -s returns the wrong
exit status (0 even on failure). Replace with "ping $host 1",
which gets the correct exit status but we don't get the round-trip time.
Note: all Cmds are executed directly without a shell, so the prog name needs
to be a full path and you can't include shell syntax like redirection and
pipes; put that in a script if you need it.
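For example, the Solaris workaround described above could be written as:
$Conf{PingCmd} = '$pingPath $host 1';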
- $Conf{PingMaxMsec} = 20;
- Maximum round-trip ping time in milliseconds. This
threshold is set to avoid backing up PCs that are remotely connected
through WAN or dialup connections. The output from ping -s (assuming it is
supported on your system) is used to check the round-trip packet time. On
your local LAN round-trip times should be much less than 20msec. On most
WAN or dialup connections the round-trip time will be typically more than
20msec. Tune if necessary.
- $Conf{CompressLevel} = 0;
- Compression level to use on files. 0 means no compression.
Compression levels can be from 1 (least cpu time, slightly worse
compression) to 9 (most cpu time, slightly better compression). The
recommended value is 3. Changing to 5, for example, will take maybe 20%
more cpu time and will get another 2-3% additional compression. See the
zlib documentation for more information about compression levels.
Changing compression on or off after backups have already been done will
require both compressed and uncompressed pool files to be stored. This
will increase the pool storage requirements, at least until all the old
backups expire and are deleted.
It is ok to change the compression value (from one non-zero value to another
non-zero value) after dumps are already done. Since BackupPC matches pool
files by comparing the uncompressed versions, it will still correctly
match new incoming files against existing pool files. The new compression
level will take effect only for new files that are newly compressed and
added to the pool.
If compression was off and you are enabling compression for the first time
you can use the BackupPC_compressPool utility to compress the pool. This
avoids having the pool grow to accommodate both compressed and
uncompressed backups. See the documentation for more information.
Note: compression needs the Compress::Zlib perl library. If the
Compress::Zlib library can't be found then $Conf{CompressLevel} is forced
to 0 (compression off).
- $Conf{ClientTimeout} = 72000;
- Timeout in seconds when listening for the transport
program's (smbclient, tar etc) stdout. If no output is received during
this time, then it is assumed that something has wedged during a backup,
and the backup is terminated.
Note that stdout buffering combined with huge files being backed up could
cause longish delays in the output from smbclient that BackupPC_dump sees,
so in rare cases you might want to increase this value.
Despite the name, this parameter sets the timeout for all transport methods
(tar, smb etc).
- $Conf{MaxOldPerPCLogFiles} = 12;
- Maximum number of log files we keep around in each PC's
directory (ie: pc/$host). These files are aged monthly. A setting of 12
means there will be at most the files LOG, LOG.0, LOG.1, ... LOG.11 in the
pc/$host directory (ie: about a year's worth). (Except for this month's LOG,
these files will have a .z extension if compression is on).
If you decrease this number after BackupPC has been running for a while you
will have to manually remove the older log files.
- $Conf{DumpPreUserCmd} = undef;
- $Conf{DumpPostUserCmd} = undef;
- $Conf{DumpPreShareCmd} = undef;
- $Conf{DumpPostShareCmd} = undef;
- $Conf{RestorePreUserCmd} = undef;
- $Conf{RestorePostUserCmd} = undef;
- $Conf{ArchivePreUserCmd} = undef;
- $Conf{ArchivePostUserCmd} = undef;
- Optional commands to run before and after dumps and
restores, and also before and after each share of a dump.
Stdout from these commands will be written to the Xfer (or Restore) log
file. One example of using these commands would be to shut down and
restart a database server, dump a database to files for backup, or doing a
snapshot of a share prior to a backup. Example:
$Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host /usr/bin/dumpMysql';
The following variable substitutions are made at run time for
$Conf{DumpPreUserCmd}, $Conf{DumpPostUserCmd}, $Conf{DumpPreShareCmd} and
$Conf{DumpPostShareCmd}:
$type type of dump (incr or full)
$xferOK 1 if the dump succeeded, 0 if it didn't
$client client name being backed up
$host host name (could be different from client name if
$Conf{ClientNameAlias} is set)
$hostIP IP address of host
$user user name from the hosts file
$moreUsers list of additional users from the hosts file
$share the first share name (or current share for
$Conf{DumpPreShareCmd} and $Conf{DumpPostShareCmd})
$shares list of all the share names
$XferMethod value of $Conf{XferMethod} (eg: tar, rsync, smb)
$sshPath value of $Conf{SshPath},
$cmdType set to DumpPreUserCmd or DumpPostUserCmd
The following variable substitutions are made at run time for
$Conf{RestorePreUserCmd} and $Conf{RestorePostUserCmd}:
$client client name being backed up
$xferOK 1 if the restore succeeded, 0 if it didn't
$host host name (could be different from client name if
$Conf{ClientNameAlias} is set)
$hostIP IP address of host
$user user name from the hosts file
$moreUsers list of additional users from the hosts file
$share the first share name
$XferMethod value of $Conf{XferMethod} (eg: tar, rsync, smb)
$sshPath value of $Conf{SshPath},
$type set to "restore"
$bkupSrcHost host name of the restore source
$bkupSrcShare share name of the restore source
$bkupSrcNum backup number of the restore source
$pathHdrSrc common starting path of restore source
$pathHdrDest common starting path of destination
$fileList list of files being restored
$cmdType set to RestorePreUserCmd or RestorePostUserCmd
The following variable substitutions are made at run time for
$Conf{ArchivePreUserCmd} and $Conf{ArchivePostUserCmd}:
$client client name being backed up
$xferOK 1 if the archive succeeded, 0 if it didn't
$host Name of the archive host
$user user name from the hosts file
$share the first share name
$XferMethod value of $Conf{XferMethod} (eg: tar, rsync, smb)
$HostList list of hosts being archived
$BackupList list of backup numbers for the hosts being archived
$archiveloc location where the archive is sent to
$parfile amount of parity data being generated (percentage)
$compression compression program being used (eg: cat, gzip, bzip2)
$compext extension used for compression type (eg: raw, gz, bz2)
$splitsize size of the files that the archive creates
$sshPath value of $Conf{SshPath},
$type set to "archive"
$cmdType set to ArchivePreUserCmd or ArchivePostUserCmd
Note: all Cmds are executed directly without a shell, so the prog name needs
to be a full path and you can't include shell syntax like redirection and
pipes; put that in a script if you need it.
- $Conf{UserCmdCheckStatus} = 0;
- Whether the exit status of each PreUserCmd and PostUserCmd
is checked.
If set and the Dump/Restore/Archive Pre/Post UserCmd returns a non-zero exit
status then the dump/restore/archive is aborted. To maintain backward
compatibility (where the exit status in early versions was always
ignored), this flag defaults to 0.
If this flag is set and the Dump/Restore/Archive PreUserCmd fails then the
matching Dump/Restore/Archive PostUserCmd is not executed. If
DumpPreShareCmd returns a non-zero exit status, then DumpPostShareCmd is not
executed, but the DumpPostUserCmd is still run (since DumpPreUserCmd must
have previously succeeded).
An example of a DumpPreUserCmd that might fail is a script that snapshots or
dumps a database which fails because of some database error.
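For example, a sketch that aborts the dump if a (hypothetical) pre-dump
database snapshot script on the client fails:
$Conf{UserCmdCheckStatus} = 1;
$Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host /usr/local/bin/snapshotDb';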
- $Conf{ClientNameAlias} = undef;
- Override the client's host name. This allows multiple
clients to all refer to the same physical host. This should only be set in
the per-PC config file and is only used by BackupPC at the last moment
prior to generating the command used to backup that machine (ie: the value
of $Conf{ClientNameAlias} is invisible everywhere else in BackupPC). The
setting can be a host name or IP address, eg:
$Conf{ClientNameAlias} = 'realHostName';
$Conf{ClientNameAlias} = '192.1.1.15';
will cause the relevant smb/tar/rsync backup/restore commands to be directed
to realHostName, not the client name.
Note: this setting doesn't work for hosts with DHCP set to 1.
Email reminders, status and messages¶
- $Conf{SendmailPath} = '';
- Full path to the sendmail command. Security caution: normal
users should not be allowed to write to this file or directory.
- $Conf{EMailNotifyMinDays} = 2.5;
- Minimum period between consecutive emails to a single user.
This tries to keep annoying email to users to a reasonable level. Email
checks are done nightly, so this number is effectively rounded up (ie: 2.5
means a user will never receive email more than once every 3 days).
- $Conf{EMailFromUserName} = '';
- Name to use as the "from" name for email.
Depending upon your mail handler this is either a plain name (eg:
"admin") or a fully-qualified name (eg:
"admin@mydomain.com").
- $Conf{EMailAdminUserName} = '';
- Destination address to an administrative user who will
receive a nightly email with warnings and errors. If there are no warnings
or errors then no email will be sent. Depending upon your mail handler
this is either a plain name (eg: "admin") or a fully-qualified
name (eg: "admin@mydomain.com").
- $Conf{EMailUserDestDomain} = '';
- Destination domain name for email sent to users. By default
this is empty, meaning email is sent to plain, unqualified addresses.
Otherwise, set it to the destination domain, eg:
$Conf{EMailUserDestDomain} = '@mydomain.com';
With this setting user email will be sent to 'user@mydomain.com'.
- $Conf{EMailNoBackupEverSubj} = undef;
- $Conf{EMailNoBackupEverMesg} = undef;
- This subject and message is sent to a user if their PC has
never been backed up.
These values are language-dependent. The default versions can be found in
the language file (eg: lib/BackupPC/Lang/en.pm). If you need to change the
message, copy it here and edit it, eg:
$Conf{EMailNoBackupEverMesg} = <<'EOF';
To: $user$domain
cc:
Subject: $subj
Dear $userName,
This is a site-specific email message.
EOF
- $Conf{EMailNotifyOldBackupDays} = 7.0;
- How old the most recent backup has to be before notifying
user. When there have been no backups in this number of days the user is
sent an email.
- $Conf{EMailNoBackupRecentSubj} = undef;
- $Conf{EMailNoBackupRecentMesg} = undef;
- This subject and message is sent to a user if their PC has
not recently been backed up (ie: more than $Conf{EMailNotifyOldBackupDays}
days ago).
These values are language-dependent. The default versions can be found in
the language file (eg: lib/BackupPC/Lang/en.pm). If you need to change the
message, copy it here and edit it, eg:
$Conf{EMailNoBackupRecentMesg} = <<'EOF';
To: $user$domain
cc:
Subject: $subj
Dear $userName,
This is a site-specific email message.
EOF
- $Conf{EMailNotifyOldOutlookDays} = 5.0;
- How old the most recent backup of Outlook files has to be
before notifying user.
- $Conf{EMailOutlookBackupSubj} = undef;
- $Conf{EMailOutlookBackupMesg} = undef;
- This subject and message is sent to a user if their Outlook
files have not recently been backed up (ie: more than
$Conf{EMailNotifyOldOutlookDays} days ago).
These values are language-dependent. The default versions can be found in
the language file (eg: lib/BackupPC/Lang/en.pm). If you need to change the
message, copy it here and edit it, eg:
$Conf{EMailOutlookBackupMesg} = <<'EOF';
To: $user$domain
cc:
Subject: $subj
Dear $userName,
This is a site-specific email message.
EOF
- $Conf{EMailHeaders} = <<EOF;
- Additional email headers. This sets the charset to
utf8.
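The shipped default is along these lines (a sketch; check your config.pl for
the exact text):
$Conf{EMailHeaders} = <<EOF;
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
EOF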
CGI user interface configuration settings¶
- $Conf{CgiAdminUserGroup} = '';
- $Conf{CgiAdminUsers} = '';
- Normal users can only access information specific to their
host. They can start/stop/browse/restore backups.
Administrative users have full access to all hosts, plus overall status and
log information.
The administrative users are the union of the unix/linux group
$Conf{CgiAdminUserGroup} and the manual list of users, separated by
spaces, in $Conf{CgiAdminUsers}. If you don't want a group or manual list
of users set the corresponding configuration setting to undef or an empty
string.
If you want every user to have admin privileges (careful!), set
$Conf{CgiAdminUsers} = '*'.
Examples:
$Conf{CgiAdminUserGroup} = 'admin';
$Conf{CgiAdminUsers} = 'craig celia';
--> administrative users are the union of group admin, plus
craig and celia.
$Conf{CgiAdminUserGroup} = '';
$Conf{CgiAdminUsers} = 'craig celia';
--> administrative users are only craig and celia.
- $Conf{CgiURL} = undef;
- URL of the BackupPC_Admin CGI script. Used for email
messages.
- $Conf{Language} = 'en';
- Language to use. See lib/BackupPC/Lang for the list of
supported languages, which include English (en), French (fr), Spanish
(es), German (de), Italian (it), Dutch (nl), Polish (pl), Portuguese
Brazilian (pt_br) and Chinese (zh_CN).
Currently the Language setting applies to the CGI interface and email
messages sent to users. Log files and other text are still in
English.
- $Conf{CgiUserHomePageCheck} = '';
- $Conf{CgiUserUrlCreate} = 'mailto:%s';
- User names that are rendered by the CGI interface can be
turned into links into their home page or other information about the
user. To set this up you need to create two sprintf() strings, that
each contain a single '%s' that will be replaced by the user name. The
default is a mailto: link.
$Conf{CgiUserHomePageCheck} should be an absolute file path that is used to
check (via "-f") that the user has a valid home page. Set this
to undef or an empty string to turn off this check.
$Conf{CgiUserUrlCreate} should be a full URL that points to the user's home
page. Set this to undef or an empty string to turn off generation of URLs
for user names.
Example:
$Conf{CgiUserHomePageCheck} = '/var/www/html/users/%s.html';
$Conf{CgiUserUrlCreate} = 'http://myhost/users/%s.html';
--> if /var/www/html/users/craig.html exists, then 'craig' will
be rendered as a link to http://myhost/users/craig.html.
- $Conf{CgiDateFormatMMDD} = 1;
- Date display format for CGI interface. A value of 1 uses
US-style dates (MM/DD), a value of 2 uses full YYYY-MM-DD format, and a
value of 0 uses international dates (DD/MM).
- $Conf{CgiNavBarAdminAllHosts} = 1;
- If set, the complete list of hosts appears in the left
navigation bar pull-down for administrators. Otherwise, just the hosts for
which the user is listed in the host file (as either the user or in
moreUsers) are displayed.
- $Conf{CgiSearchBoxEnable} = 1;
- Enable/disable the search box in the navigation bar.
- $Conf{CgiNavBarLinks} = [ ... ];
- Additional navigation bar links. These appear for both
regular users and administrators. This is a list of hashes giving the link
(URL) and the text (name) for the link. Specifying lname instead of name
uses the language specific string (ie: $Lang->{lname}) instead of just
literally displaying name.
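For example, a sketch that adds one language-keyed link and one literal
link (the link targets are placeholders; lname assumes the key exists in
the language file):
$Conf{CgiNavBarLinks} = [
{
link  => "?action=view&type=docs",
lname => "Documentation",              # uses $Lang->{Documentation}
},
{
link => "http://backuppc.sourceforge.net",
name => "SourceForge",                 # literal link text
},
];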
- $Conf{CgiStatusHilightColor} = { ...
- Hilight colors based on status that are used in the PC
summary page.
- $Conf{CgiHeaders} = '<meta http-equiv="pragma"
content="no-cache">';
- Additional CGI header text.
- $Conf{CgiImageDir} = '';
- Directory where images are stored. This directory should be
below Apache's DocumentRoot. This value isn't used by BackupPC but is used
by configure.pl when you upgrade BackupPC.
Example:
$Conf{CgiImageDir} = '/var/www/htdocs/BackupPC';
- $Conf{CgiExt2ContentType} = { };
- Additional mappings of file name extensions to Content-Type
for individual file restore. See $Ext2ContentType in BackupPC_Admin for
the default setting. You can add additional settings here, or override any
default settings. Example:
$Conf{CgiExt2ContentType} = {
'pl' => 'text/plain',
};
- $Conf{CgiImageDirURL} = '';
- URL (without the leading http://host) for BackupPC's image
directory. The CGI script uses this value to serve up image files.
Example:
$Conf{CgiImageDirURL} = '/BackupPC';
- $Conf{CgiCSSFile} = 'BackupPC_stnd.css';
- CSS stylesheet "skin" for the CGI interface. It
is stored in the $Conf{CgiImageDir} directory and accessed via the
$Conf{CgiImageDirURL} URL.
For BackupPC v3.x several color, layout and font changes were made. The
previous v2.x version is available as BackupPC_stnd_orig.css, so if you
prefer the old skin, change this to BackupPC_stnd_orig.css.
- $Conf{CgiUserConfigEditEnable} = 1;
- Whether the user is allowed to edit their per-PC
config.
- $Conf{CgiUserConfigEdit} = { ...
- Which per-host config variables a non-admin user is allowed
to edit. Admin users can edit all per-host config variables, even if
disabled in this list.
SECURITY WARNING: Do not let users edit any of the Cmd config variables!
That's because a user could set a Cmd to a shell script of their choice
and it will be run as the BackupPC user. That script could do all sorts of
bad things.
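For example, a sketch of explicitly disabling user editing of the Cmd
variables (entries are keyed by config variable name; a value of 0 disables
editing of that variable):
$Conf{CgiUserConfigEdit}{DumpPreUserCmd}      = 0;
$Conf{CgiUserConfigEdit}{DumpPostUserCmd}     = 0;
$Conf{CgiUserConfigEdit}{RestorePreUserCmd}   = 0;
$Conf{CgiUserConfigEdit}{RestorePostUserCmd}  = 0;
$Conf{CgiUserConfigEdit}{ArchivePreUserCmd}   = 0;
$Conf{CgiUserConfigEdit}{ArchivePostUserCmd}  = 0;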
Version Numbers¶
Starting with v1.4.0 BackupPC uses a X.Y.Z version numbering system, instead of
X.0Y. The first digit is for major new releases, the middle digit is for
significant feature releases and improvements (most of the releases have been
in this category), and the last digit is for bug fixes. You should think of
the old 1.00, 1.01, 1.02 and 1.03 as 1.0.0, 1.1.0, 1.2.0 and 1.3.0.
Additionally, patches might be made available. A patched version number is of
the form X.Y.ZplN (eg: 2.1.0pl2), where N is the patch level.
Author¶
Craig Barratt <cbarratt@users.sourceforge.net>
See <http://backuppc.sourceforge.net>.
Copyright¶
Copyright (C) 2001-2009 Craig Barratt
Credits¶
Ryan Kucera contributed the directory navigation code and images for v1.5.0. He
contributed the first skeleton of BackupPC_restore. He also added a
significant revision to the CGI interface, including CSS tags, in v2.1.0, and
designed the BackupPC logo.
Xavier Nicollet, with additions from Guillaume Filion, added the
internationalization (i18n) support to the CGI interface for v2.0.0. Xavier
provided the French translation fr.pm, with additions from Guillaume.
Guillaume Filion wrote BackupPC_zipCreate and added the CGI support for zip
download, in addition to some CGI cleanup, for v1.5.0. Guillaume continues to
support fr.pm updates for each new version.
Josh Marshall implemented the Archive feature in v2.1.0.
Ludovic Drolez supports the BackupPC Debian package.
Javier Gonzalez provided the Spanish translation, es.pm for v2.0.0.
Manfred Herrmann provided the German translation, de.pm for v2.0.0. Manfred
continues to support de.pm updates for each new version, together with some
help from Ralph Passgang.
Lorenzo Cappelletti provided the Italian translation, it.pm for v2.1.0. Giuseppe
Iuculano and Vittorio Macchi updated it for 3.0.0.
Lieven Bridts provided the Dutch translation, nl.pm, for v2.1.0, with some
tweaks from Guus Houtzager, and updates for 3.0.0.
Reginaldo Ferreira provided the Portuguese-Brazilian translation pt_br.pm for
v2.2.0.
Rich Duzenbury provided the RSS feed option to the CGI interface.
Jono Woodhouse from CapeSoft Software (www.capesoft.com) provided a new CSS skin
for 3.0.0 with several layout improvements. Sean Cameron (also from CapeSoft)
designed new and more compact file icons for 3.0.0.
Youlin Feng provided the Chinese translation for 3.1.0.
Karol 'Semper' Stelmaczonek provided the Polish translation for 3.1.0.
Jeremy Tietsort provided the host summary table sorting feature for 3.1.0.
Paul Mantz contributed the ftp Xfer method for 3.2.0.
Many people have reported bugs, made useful suggestions and helped with testing;
see the ChangeLog and the mailing lists.
Your name could appear here in the next version!
License¶
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; either version 2 of the License, or (at your option) any later
version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License in the LICENSE
file along with this program; if not, write to the Free Software Foundation,
Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.