Dar Documentation


DAR's Limitations





Here follows a description of the known limitations you should review before creating a bug report for dar:

Fixed Limits

    • The size of SLICES may be limited by the file system or kernel (for example, the maximum file size is 2 GB with Linux kernel 2.2.x); other limits may exist depending on the filesystem used.
    • the number of SLICES is limited only by the length of filenames: with a basename of 10 characters, and considering a file system that supports at most 256 characters per filename, you could already get up to 10^241 SLICES (a 1 followed by 241 zeros). As soon as your file system supports bigger files or longer filenames, dar will follow without change.
    • dar_manager can gather up to 65534 different backups, not more. This limit should be high enough not to be a problem.
    • when using a listing file to define which files to operate on, each line of the listing file must not be longer than 20480 bytes, else a new line is assumed after the 20480th byte (see the check sketched below).
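
For example, here is one way to spot overlong lines in a listing file before handing it to dar (list.txt being a hypothetical file name):

    awk 'length($0) > 20480 { print "line " NR " is too long" }' list.txt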

System variable limits

Memory

Dar uses virtual memory (= RAM + swap) to be able to add the list of files saved at the end of each archive. Dar uses its own integer type (called "infinint") that has no size limit (unlike 32-bit or 64-bit integers). This already makes dar able to manage zettabyte volumes and beyond, even if systems cannot yet manage such file sizes. Nevertheless, this comes with memory and CPU overhead, in addition to the C++ overhead for the data structures. All together, dar needs an average of 650 bytes of virtual memory per saved file with dar-2.1.0 and around 850 bytes with dar-2.4.x (that's the price to pay for new features). Thus, for example, if you have 110,000 files to save, whatever the total amount of data to save, dar will require around 90 MB of virtual memory.
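
The arithmetic behind that example, using the dar-2.4.x figure:

    110,000 files x 850 bytes/file = 93,500,000 bytes, i.e. around 90 MB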

Now, when doing catalogue extraction or differential backup, dar holds two catalogues in memory, so the amount of memory needed doubles (180 MB in the example). Why? Because for a differential backup, dar starts with the catalogue of the archive of reference, which is needed to know which files to save and which not to save, and on the other hand builds the catalogue of the new archive all along the process. As for catalogue extraction, the process is equivalent to making a differential backup just after a full backup.

As you can guess, merging two archives into a third one requires even more memory (memory to store the first archive to merge, the second archive to merge, and the resulting archive to produce).

This memory issue is not a limit by itself, but you need enough virtual memory to be able to save your data (if necessary you can still add swap space, as a partition or as a plain file, as sketched below).
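
A minimal sketch of adding swap space as a plain file on Linux (the path /swapfile and the 1 GiB size are arbitrary choices for this illustration):

    dd if=/dev/zero of=/swapfile bs=1M count=1024   # create a 1 GiB file filled with zeros
    chmod 600 /swapfile                             # restrict access, as swapon expects
    mkswap /swapfile                                # set up the file as swap space
    swapon /swapfile                                # enable it immediately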

Integers

To overcome the previously explained memory issue, dar can be built in another mode, in which "infinint" is replaced by 32-bit or 64-bit integers, as selected with the --enable-mode=32 or --enable-mode=64 option given to the configure script. The executables built this way (dar, dar_xform, dar_slave and dar_manager) run faster and use much less memory than the "full" versions using "infinint". But yes, there are drawbacks: slice size, file size, dates, number of files to backup, total archive size (sum of all slices), etc., are bounded by 2^32 (= 4,294,967,296) for 32-bit integers and 2^64 (= 18,446,744,073,709,551,616) for 64-bit integers. In clear terms, the 32-bit version cannot handle dates after year 2106 nor file sizes over 4 GB, while the 64-bit version cannot handle dates after around 500 billion years (which is longer than the estimated age of the Universe: 15 billion years) nor files larger than around 18 EB (18 exabytes).
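
For example, building the 64-bit flavor from the source tree, assuming the usual configure/make steps apply on your platform:

    ./configure --enable-mode=64
    make
    make install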

What is the behavior when such a limit is reached? For compatibility with the rest of the code, limited-length integers (32-bit or 64-bit for now) cannot be used as-is; they are enclosed in a C++ class which reports overflow in arithmetic operations. Archives generated by all the different versions of dar stay compatible with one another, but the 32-bit and 64-bit versions will not be able to read or produce all possible archives. In that case, the dar suite programs abort with an error message asking you to use the "full" version of dar.

Command line

On several systems, command-line long options are not available. This is due to the fact that dar relies on GNU getopt. Systems like FreeBSD do not provide GNU getopt by default; the getopt function from their standard library supports neither long options nor optional arguments. On such systems you will have to use short options only, and to overcome the lack of optional arguments you need to set the argument explicitly: for example, in place of "-z" use "-z 9", and so on (see the dar man page, section "EXPLICIT OPTIONAL ARGUMENTS"). All of dar's features remain available with FreeBSD's getopt, just use short options and explicit arguments, as in the example below.
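
For instance, creating a compressed backup with short options and explicit arguments only (the basename "backup" and the root directory /home are placeholders for this illustration):

    dar -c backup -R /home -z 9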

Alternatively, you can install GNU getopt as a separate library called libgnugetopt. If the include file <getopt.h> is also available, the configure script will detect it and use this library. This way you can have long options on FreeBSD, for example.

Another point concerns the command-line length limitation. All systems (correct me if I am wrong) limit the size of the command line. If you want to pass more options to dar than your system can afford, you can use the -B option instead, and put all of dar's arguments (or just some of them) in the file given to -B. -B can be used several times on the command line and is recursive (a file read through -B may itself contain -B options), as sketched below.
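
A sketch with hypothetical file names: suppose a file named common.dcf contains some of dar's arguments, one or several per line:

    -z 9
    -R /home

These arguments can then be pulled in from the command line, together with any others:

    dar -c monday_backup -B common.dcf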

Dates

Unix files have three dates:
  • last modification date (mtime)
  • last access date (atime)
  • last inode change (ctime)
In dar, dates are stored as integers (the number of seconds elapsed since Jan 1st, 1970), as Unix systems do. As seen above, the limitation is not due to dar but to the integer type used, so if you use infinint, you should be able to store dates as far in the future as you want. Of course, dar cannot store dates before Jan 1st, 1970, but that should not be a very big problem. ;-)

There is no standard way under Unix to change the ctime. So dar is not able to restore the ctime of files.

Symlinks

Unix systems do not provide a way to modify the last modification date (aka mtime) of an existing symlink. Worse, if you try to modify the mtime of a symlink, you end up modifying the mtime of the file targeted by that symlink, leaving the mtime of the symlink itself untouched! For that reason, dar does not restore the mtime of symlinks; it does not even try, so as not to mess up the mtime of the inodes a symlink could point to.