Dar is a command-line tool for backing up and archiving large live filesystems.
It is a filesystem-independent and cross-platform tool.
Dar is not a boot loader, nor is it an operating system.
It does not create or format partitions, but it can restore a full
filesystem into a larger or smaller partition, from one partition to
several (or the opposite, from several partitions to one), and from one
filesystem type to another (ext2/3/4 to ReiserFS, for example).
- Saves all data and metadata
It can save and restore hard-linked inodes of any type (hard-linked
plain files, sockets, char/block devices, even hard-linked symlinks (!))
as well as Solaris Door files. It takes care of Extended Attributes
(Linux), macOS file forks and ACLs (Linux, Solaris, ...).
It can also detect and restore sparse files, even when the
underlying filesystem does not support them. This saves backup space,
but above all it optimizes disk usage at restoration time,
guaranteeing that you will always be able to restore your backup on a
volume of the same size (which is not true for backup tools that
ignore how sparse files are stored on filesystems).
- Suitable for live filesystem backup
Thanks to its ability to detect that a file changed while it was being
read, dar can retry the backup of that particular file. It also
provides mechanisms that let the user define actions to run before and
after saving a given type of file, before or after entering a given
directory, and so on. Such an action can be a simple user script or a
more complex executable; there is no constraint.
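As an illustration, here is a sketch of these mechanisms using dar's --retry-on-change option and the backup-hook options (-< for the file mask, -= for the command to run); the values and script path below are hypothetical, and the exact syntax and macro list are given in the dar man page:

```shell
# Retry reading a file up to 3 times if it changes while being saved
dar --create my_backup --fs-root / --retry-on-change 3

# Run a user command around the saving of files matching a mask
# (%p and %f expand to the file's path and name, per the man page)
dar --create my_backup --fs-root / -< "*.db" -= "/usr/local/bin/quiesce-db %p/%f"
```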
- Embedded compression
Backups can be compressed with a large variety of algorithms
(gzip, bzip2, lzma/xz, lzo, zstd, lz4, and more to come).
Compression is done per file, which makes the backup file much more
robust at the cost of an unnoticeable degradation of the compression
ratio. It also lets you tell dar which files to compress and which
ones not to try compressing, saving a lot of CPU cycles.
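For example, assuming the -z/--compression and -Z/--exclude-compression options as found in recent dar releases, already-compressed files can be skipped:

```shell
# Compress with zstd, but do not try to compress files
# that are already compressed
dar --create my_backup --fs-root /home --compression zstd \
    -Z "*.gz" -Z "*.zip" -Z "*.jpg" -Z "*.mp4"
```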
- Embedded encryption
Strong encryption is available with several well-known and reputable
algorithms (Blowfish, AES, Twofish, Serpent, Camellia, ... but also
by means of public/private keys, integrating GPG encryption and signing).
Securing your data is not only a matter of cipher algorithm; it is also
a matter of protection against code-book/dictionary attacks. For that
purpose, when encryption is activated, data floats inside
the archive at a random position, thanks to two elastic buffers,
one added at the beginning of the archive, the other at the end. Last,
a KDF with salt and a configurable iteration count increases the
strength of the human-provided key, leading the encryption to use a
different key even when the human provides the same password/passphrase
for two different archives.
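A sketch of symmetric and public-key encryption, assuming the -K/--key option; the recipient address is hypothetical and the exact gnupg syntax is described in the man page:

```shell
# Symmetric encryption with AES; dar prompts for the passphrase
dar --create my_backup --fs-root /home --key aes:

# Asymmetric encryption and signing through GPG keys
dar --create my_backup --fs-root /home --key gnupg:recipient@example.com
```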
- Cloud-compatible backup tool
In addition to embedded encryption, dar can directly use
SSH/SFTP or FTP to write and read your backups on remote storage
(cloud, NAS, ...), without requiring any local storage. You can also
leverage the ability to split a backup into files of a given size
(called slices) to store your backup on removable media
(tapes, disks, ...), even low-end Blu-ray, DVD-RW, CD-RW, ...
or floppies (!) if you still have
them... In that context it may be interesting to also leverage
dar's easy integration with Parchive, not only to
detect corruption and avoid unknowingly restoring a corrupted system,
but also to repair your backups.
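By way of illustration, a remote sliced backup might look as follows (the host, path and slice size are hypothetical; sftp:// URLs and the -s/--slice option are described in the man page):

```shell
# Write 1 GiB slices directly to a remote server over SFTP,
# without any local storage
dar --create sftp://user@nas.example.com/backups/my_backup \
    --fs-root /home --slice 1G
```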
- Many backup flavors available
Dar can perform full backups¹, incremental backups²,
differential backups³ and decremental backups⁴.
It also records files that have been removed since the last backup
was made, so restoring a system brings it back to the exact state
it was in at the time of the differential/incremental/decremental
backup (removing files that ought to be removed, adding files that
ought to be added and modifying files that ought to be modified).
- Binary delta
For differential and incremental backups, you can also leverage binary
deltas, which lead dar to create a patch for a large file when it
changes, instead of saving it as a whole even if only a few bytes
changed (mailboxes, and so on). A filtering mechanism lets you decide
which files may be saved as a patch when they change and which ones
will always be saved as a whole.
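A sketch of this workflow, assuming the --delta sig option available in dar releases built with librsync support:

```shell
# Full backup recording delta signatures of the saved files
dar --create full_backup --fs-root /home --delta sig

# Differential backup: changed files with a recorded signature are
# saved as binary patches instead of being saved as a whole
dar --create diff_backup --fs-root /home --ref full_backup --delta sig
```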
- Easy automation
User commands and scripts can be run by dar at each slice
boundary, but also before and after saving specified types of
files and directories. Dar also provides a
documented API and a Python binding.
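For instance, a command can be run at each slice boundary through the -E/--execute option; the copy destination below is hypothetical, and the %p/%b/%n/%e macros (slice path, basename, number and extension) are detailed in the man page:

```shell
# Copy each slice to removable storage as soon as it is completed
dar --create my_backup --fs-root /home --slice 4G \
    --execute "cp %p/%b.%n.%e /mnt/tape/"
```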
- Good quality software
Dar was born in 2002. Thanks to its modular source code
and highly abstracted data structures, the many
features added since then never required the developer
to modify already existing features.
Modularity and abstraction are the two pillars of the dar/libdar design.
Dar is easy to use
While dar/libdar provide a lot of features we will not mention here,
you can use dar without knowing all of them. In its
simplest form, dar needs only a few options; here follow
some examples of use that should not require additional explanation:
- Backing up the /usr directory:
dar --create my_backup --fs-root / --go-into usr
- Restoration (restoring
/usr into an alternate directory):
dar --extract my_backup --fs-root /some/where/else
- Testing backup sanity:
dar --test my_backup
- Comparing a backup content with the existing filesystem:
dar --diff my_backup --fs-root /
Dar is well documented
A big effort has been made on documentation, but that does not mean you
have to read it all to be able to use dar, as dar is very easy to use:
common use cases are covered by the tutorial,
and common questions get direct answers in the FAQ.
Then, if you like or if you need, you can also look at the
detailed man pages
for a particular feature (these man pages are the
reference documentation for each command-line tool).
You may also find some help on the mailing-list,
where a bit more than a hundred
subscribed users can help you.
Dar's documentation is big because it also includes everything that may
be useful to know about libdar, which is intended for developers of
external applications relying on this library. For the even more
curious, there is also documentation about dar's internals: libdar's
structure, the archive format, and so on, which can ease the
understanding of the magic that makes all this work and gives a better
understanding of the dar/libdar code, which is written in C++. But no,
you do not need to read all this to just use dar! ;-)
An abstracted list of features follows, if you want to know more
about dar/libdar from a high-level point of view.
Projects in alphabetical order:
AVFS is a virtual file system layer for transparently accessing the
content of archives and remote directories just like local files.
A script by Bob Rogers that creates and verifies a backup using
dump/restore or dar.
Baras by Aaron D. Marasco is a rewrite of SaraB in Perl.
New in 2022: a tool by Per Jensen to automate and simplify the use of
dar with redundancy, remote backup, backup testing after transfer and
many other interesting features, such as backup definitions and log
management.
A framework by Dan A. Muresan for doing periodic DAR backups.
dar_fuse by !evil.
dar_fuse provides a faster AVFS equivalent thanks to its direct
use of the libdar Python API and the fusepy module.
Darbup by Carlo Teubner.
One of darbup's key features is its ability to automatically delete old
archives when the total space taken up by existing archives exceeds
some configured maximum.
A tool by Jared Jennings to back up a few
hundred gigabytes of data onto dozens of optical discs in a way that
can still be restored ten years later.
DarGUI by Malcolm Poole
is a front-end to dar providing simple, graphical access to the
main features of dar.
Disk archive interface for Emacs
by Stefan Reichör
gdar by Tobias Specht,
a graphical user interface to browse and extract dar archives
HUbackup (Home User
backup) by SivanGreen
kdar is a KDE-3
Graphical User Interface to dar made by Johnathan Burchill
Lazy Backup by Daniel
Johnson. Lazy Backup is intended to be so easy even lazy people will do
their backups.
A Dar plugin has also been made by Guus Jansman.
SaraB (Schedule And Rotate Automatic Backups) by Tristan Rhodes.
SaraB works with DAR to
schedule and rotate backups. It supports the Towers of Hanoi,
Grandfather-Father-Son, or any custom backup rotation strategy.
If a project you like is missing, you are welcome to contact dar's
author for it to be referred here (contact coordinates can be found in
the AUTHOR file of the source package).
1 Full backup:
A full backup is a backup of a full filesystem or of a subset of files
where, for each file, the archive contains all the inode information
(ownership, permissions, dates, etc.), the file's data and, where
present, the file's Extended Attributes.
2 Differential backup:
A differential backup is based on a full backup. It contains only the
data and Extended Attributes of files that changed since the full
backup was made. It also contains the list of files that have been
removed since the full backup was made. For files that did not change,
it contains only the inode information. The advantage is that the
backup process is much faster and the space required much
lower. The drawback is that you need to restore the full backup first,
then the differential backup to get the last saved state of your system.
But if you want the last version of a file that changed recently you only
need the last differential backup.
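For illustration, a differential backup takes the full backup as its reference through the -A/--ref option; a minimal sketch:

```shell
# Full backup, then a differential backup based on it
dar --create full_backup --fs-root /
dar --create diff_backup --fs-root / --ref full_backup
```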
3 Incremental backup:
An incremental backup is essentially the same thing as a differential
backup. Some people make a distinction; I do not. The only difference I
see is that an incremental backup is based not on a full backup but on
a differential backup or on another incremental one.
4 Decremental backup:
A decremental backup is a backup method in which the most recent backup
is a full backup, while the older backups are expressed as differences
against that full backup. The advantage of this type of backup is that
you can easily restore your system to its most recent state using only
the last backup. And if you want to restore it to a state it had some
time before, you restore the last backup (the full one), then the
previous archive (a decremental backup), and so on. As you most usually
want to restore the system to its last available state, this makes
restoration much easier compared to incremental backups. However, it
suffers from an important drawback: each time you make a new backup,
the previous full backup has to be transformed into a decremental one.
Then you have to remove the former full backup and replace it with its
decremental version.