Clarification on backup rotation/retention

Hi,

I think I need some clarification on how the forget command works. I want to keep the last 7 days of backups, the last 4 weeks and the last month, so I put the following command in a cron job that runs after the backup:

restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 1 --prune
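
For context, the crontab entry looks roughly like this (a sketch: the 19:00 schedule matches the snapshots below, but the repository path and password file are placeholders, not my real ones):

# 19:00 daily: run the backup, then apply the retention policy
0 19 * * * restic -r /srv/restic-repo --password-file /root/.restic-pw backup /etc /var/log /vzdump_excluded && restic -r /srv/restic-repo --password-file /root/.restic-pw forget --keep-daily 7 --keep-weekly 4 --keep-monthly 1 --prune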

While it worked well during the first week, in the second week there was a problem: a backup was deleted that (it seems to me) should have been kept.

$ restic snapshots
repository 7e6f1894 opened successfully, password is correct
ID        Date                 Host        Tags        Directory
----------------------------------------------------------------------
60c2e53d  2018-06-04 21:31:45  ct17112501              /vzdump_excluded
b8ece11f  2018-06-10 19:00:06  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
2d4df84b  2018-06-13 19:00:01  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
6c885246  2018-06-14 19:00:02  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
ced1a6cf  2018-06-15 19:00:02  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
c9379273  2018-06-16 19:00:01  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
0891686f  2018-06-17 19:00:01  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
a2b1f9ed  2018-06-18 19:00:01  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
21a77081  2018-06-19 19:00:02  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
----------------------------------------------------------------------
9 snapshots

The backup of 2018-06-11 is missing.

Have I misunderstood how the 'forget' command works?

Sorry, I had not looked closely at the remaining snapshots: in fact I have 2 from week 23 (2018-06-04 and 2018-06-10) and I am missing 2 from week 24 (2018-06-11 and 2018-06-12).

On what day do weeks begin in restic?

The week starts on Monday and ends on Sunday.

Restic kept 7 daily snapshots and 1 weekly snapshot. Snapshot 60c2e53d has a different set of paths than the other snapshots and is therefore treated differently.

Does this mean that snapshot 60c2e53d will never be pruned?

And that each time the set of source paths changes, the old snapshots will be kept?

If there are no other snapshots with the same paths (and host), then indeed, this snapshot will not get pruned.

You can use --tag to group snapshots that use a different path. See the documentation for more information on that topic.
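
For example, a minimal sketch of that approach (the tag name vzdump is just an illustration, not something from this thread):

$ restic backup --tag vzdump /vzdump_excluded
$ restic forget --dry-run --tag vzdump --keep-daily 7 --keep-weekly 4 --keep-monthly 1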

You can configure how restic groups snapshots together: with restic forget --dry-run --group-by host it will group snapshots of different directories together and apply the policy to each group as a whole.
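
Applied to the policy from this thread, a host-grouped dry run would look like this (a sketch; --dry-run means nothing is actually removed):

$ restic forget --dry-run --group-by host --keep-daily 7 --keep-weekly 4 --keep-monthly 1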

Make liberal use of --dry-run while developing retention policies :slight_smile:

Did you read the whole section on forgetting backups in the manual yet? See "Removing backup snapshots" in the restic documentation.

Is there anything unclear or missing?


In fact I misunderstood the group-by option in the doc (and had not read the description of this option in the help: restic forget --help). Now everything is clear, sorry for the noise.

Cool, great we could clear it up. We're always interested in improving the docs, so if you have an idea on how to do that please let us know!

Ok, maybe we could replace the following paragraph:

The grouping options can be set with --group-by, to only group snapshots by paths and tags use --group-by paths,tags. The policy is then applied to each group of snapshots separately. This is a safety feature.

by

Grouping of snapshots can be defined with the '--group-by' option. This option accepts the keywords 'host', 'paths' and 'tags', which can be combined using a comma as separator. For example, to group only snapshots with the same paths and tags: '--group-by paths,tags'. The policy is then applied to each snapshot group separately. It is a safety feature.

my 2 cents.

Hi,

I come back to this subject because it seems to me there is a problem in my config: today one snapshot too many was pruned.

In my opinion the snapshot of 2018-06-10 should have been kept, don't you think? (The list of last week's snapshots is at the top of this thread.)

$ restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 1 --prune --host ct17112501

Applying Policy: keep the last 7 daily, 4 weekly, 1 monthly snapshots
snapshots for (host [ct17112501], paths [/etc, /var/log, /vzdump_excluded]):

keep 9 snapshots:
ID        Date                 Host        Tags        Directory
----------------------------------------------------------------------
0891686f  2018-06-17 19:00:01  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
cced8d99  2018-06-24 19:00:03  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
ee8e41fe  2018-06-26 19:00:02  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
a354ed08  2018-06-27 19:00:02  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
26b03078  2018-06-28 19:00:01  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
a92359d5  2018-06-29 19:00:01  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
644f8c98  2018-06-30 19:00:02  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
3bd99e75  2018-07-01 19:00:01  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
610e1530  2018-07-02 19:00:01  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
----------------------------------------------------------------------
9 snapshots

remove 2 snapshots:
ID        Date                 Host        Tags        Directory
----------------------------------------------------------------------
b8ece11f  2018-06-10 19:00:06  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
f6b5f945  2018-06-25 19:00:02  ct17112501          ┌── /etc
                                                   │   /var/log
                                                   └── /vzdump_excluded
----------------------------------------------------------------------
2 snapshots

snapshots for (host [ct17112501], paths [/vzdump_excluded]):

keep 1 snapshots:
ID        Date                 Host        Tags        Directory
----------------------------------------------------------------------
60c2e53d  2018-06-04 21:31:45  ct17112501              /vzdump_excluded
----------------------------------------------------------------------
1 snapshots

2 snapshots have been removed, running prune
counting files in repo
building new index for repo
[0:03] 100.00%  100 / 100 packs

repository contains 100 packs (37663 blobs) with 377.055 MiB
processed 37663 blobs: 0 duplicate blobs, 0B duplicate
load all snapshots
find data that is still in use for 10 snapshots
[0:16] 100.00%  10 / 10 snapshots

found 34014 of 37663 data blobs still in use, removing 3649 blobs
will remove 0 invalid files
will delete 1 packs and rewrite 9 packs, this frees 19.748 MiB
[0:02] 100.00%  9 / 9 packs rewritten

counting files in repo
[0:01] 100.00%  92 / 92 packs

finding old index files
saved new indexes as [c9294d83]
remove 3 old index files
[0:00] 100.00%  10 / 10 packs deleted

done

Regards.

You are right, that looks strange. From my understanding restic should have kept snapshot b8ece11f.

I’ve reproduced it locally, it behaves the same for me. Internally, restic uses time.ISOWeek() to get the week number, which in turn implements the ISO 8601 standard for week numbers. It defines that a week starts on a Monday.

The reasons for keeping the snapshots are as follows, starting with counters daily=7, weekly=4, and monthly=1:

  • 2018-07-02 is kept because it’s a daily, weekly and monthly snapshot (counters daily=6, weekly=3, monthly=0), this date starts a new week (calendar week 27) because it’s a Monday
  • 2018-07-01 is kept because it’s a daily and a weekly snapshot (counters daily=5, weekly=2, monthly=0), it’s a Sunday so it’s a different week as 2018-07-02 (calendar week 26)
  • 2018-06-30 to 2018-06-26 are kept because they are daily snapshots (counters daily=0, weekly=2, monthly=0), all within calendar week 26
  • 2018-06-24 is kept because it’s a weekly snapshot in a new week (calendar week 25) (counters daily=0, weekly=1, monthly=0)
  • 2018-06-17 is kept because it’s a weekly snapshot in a new week (calendar week 24 (counters are all zero here)

At this point, restic decides that all other snapshots (including the one on 2018-06-10) should be forgotten.
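
If you want to double-check which ISO week a given date falls into, GNU date can print it directly (%G is the ISO year, %V the ISO week number, %u the day of the week with Monday = 1):

$ date -d 2018-06-10 +%G-W%V-%u
2018-W23-7
$ date -d 2018-06-11 +%G-W%V-%u
2018-W24-1

So 2018-06-10 is the Sunday that closes calendar week 23, and week 24 starts the very next day.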

For the calendar weeks, you can use cal as follows to show them e.g. for June and July:

$ cal --monday --week 6 2018
       June 2018
   Mo Tu We Th Fr Sa Su
22              1  2  3
23  4  5  6  7  8  9 10
24 11 12 13 14 15 16 17
25 18 19 20 21 22 23 24
26 25 26 27 28 29 30

$ cal --monday --week 7 2018
       July 2018
   Mo Tu We Th Fr Sa Su
26                    1
27  2  3  4  5  6  7  8
28  9 10 11 12 13 14 15
29 16 17 18 19 20 21 22
30 23 24 25 26 27 28 29
31 30 31

I got this information by fiddling around with the source code and including more debug output, but there’s an issue about adding some output which explains to the user why restic keeps/forgets snapshots: https://github.com/restic/restic/issues/1235


I’ve implemented this here: https://github.com/restic/restic/pull/1876, please let me know what you think

Sample run:

$ ./restic forget --explain --dry-run --keep-daily 7 --keep-weekly 4 --keep-monthly 1

repository 559c5ffa opened successfully, password is correct
Applying Policy: keep the last 7 daily, 4 weekly, 1 monthly snapshots
snapshots for (host [mopped], paths [/home/fd0/work/restic]):
keep 9 snapshots:
ID        Date                 Host        Tags        Directory
----------------------------------------------------------------------
b06c68f1  2018-06-17 19:00:00  mopped                  /home/fd0/work/restic
                    daily snapshot
                    weekly snapshot
                    monthly snapshot

93636576  2018-06-24 19:00:00  mopped                  /home/fd0/work/restic
                    daily snapshot
                    weekly snapshot

948e8801  2018-06-26 19:00:00  mopped                  /home/fd0/work/restic
                    daily snapshot

058e4f60  2018-06-27 19:00:00  mopped                  /home/fd0/work/restic
                    daily snapshot

13023ad3  2018-06-28 19:00:00  mopped                  /home/fd0/work/restic
                    daily snapshot

b0870185  2018-06-29 19:00:00  mopped                  /home/fd0/work/restic
                    daily snapshot

7c333b49  2018-06-30 19:00:00  mopped                  /home/fd0/work/restic
                    daily snapshot

6d965b57  2018-07-01 19:00:00  mopped                  /home/fd0/work/restic
                    weekly snapshot

f5dc9c02  2018-07-02 19:00:00  mopped                  /home/fd0/work/restic
                    weekly snapshot
----------------------------------------------------------------------
9 snapshots
remove 2 snapshots:
ID        Date                 Host        Tags        Directory
----------------------------------------------------------------------
8f40d1f8  2018-06-10 19:00:00  mopped                  /home/fd0/work/restic
376c64d4  2018-06-25 19:00:00  mopped                  /home/fd0/work/restic
----------------------------------------------------------------------
2 snapshots

Thank you for the time spent on my question.

More details on the snapshots are always a good thing.

I also have a remark on how days, weeks and months are counted. If I understand the logic, if I want to have 4 weeks of backups (in addition to the current week) I must ask restic to keep 5.
However, my usual way of counting backups is to count the retention periods in addition to the current period.
That's what I had misunderstood.
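
In other words, with my way of counting I would need to run something like this to get the current week plus 4 full past weeks (and the daily and monthly counters bumped the same way):

$ restic forget --dry-run --keep-daily 8 --keep-weekly 5 --keep-monthly 2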


I thought so too. I probably should have read the last paragraph in the documentation instead of guessing. :face_with_monocle:

Hi,

Has the --explain feature been released? I'm having some trouble understanding restic's snapshot 'forget' behavior and the additional annotation from --explain would be a big help.

Thanks.

It’s not contained in any release yet, but you could try one of the beta binaries (or build restic yourself from the code): https://beta.restic.net/restic-v0.9.2-110-g6bc99ce4/

The release you identified doesn't understand --explain, because displaying the reasons became the default behavior (I dug that out of the GitHub history). So, with this beta, it unconditionally displays the reasons:

/restic_v0.9.2-110-g6bc99ce4_linux_amd64 forget --dry-run --tag systemd.timer --keep-daily 7 --keep-weekly 8 --keep-monthly 18 --keep-yearly 3
repository ae6fa102 opened successfully, password is correct
found 1 old cache directories in /root/.cache/restic, pass --cleanup-cache to remove them
Applying Policy: keep the last 7 daily, 8 weekly, 18 monthly, 3 yearly snapshots
snapshots for (host [canopus], paths [/, /home]):

keep 8 snapshots:
ID        Time                 Host        Tags           Reasons           Paths
---------------------------------------------------------------------------------
173c465c  2018-07-29 07:00:27  canopus     systemd.timer  weekly snapshot   /
                                                          monthly snapshot  /home

f39c5248  2018-08-05 07:00:43  canopus     systemd.timer  daily snapshot    /
                                                          weekly snapshot   /home

a96f06c6  2018-08-12 07:00:05  canopus     systemd.timer  daily snapshot    /
                                                          weekly snapshot   /home

e0384214  2018-08-19 07:00:09  canopus     systemd.timer  daily snapshot    /
                                                          weekly snapshot   /home
                                                          monthly snapshot

2032865c  2018-09-02 07:00:16  canopus     systemd.timer  daily snapshot    /
                                                          weekly snapshot   /home

7cde572a  2018-09-09 07:00:28  canopus     systemd.timer  daily snapshot    /
                                                          weekly snapshot   /home

10693b01  2018-09-16 07:00:02  canopus     systemd.timer  daily snapshot    /
                                                          weekly snapshot   /home

c78efc81  2018-09-23 07:00:04  canopus     systemd.timer  daily snapshot    /
                                                          weekly snapshot   /home
                                                          monthly snapshot
                                                          yearly snapshot
---------------------------------------------------------------------------------
8 snapshots

Output looks good! I’ll run this for a while to double check the retention policies on my systems.

Thanks!


Oh, sorry, I forgot to mention that. I hope this answered your questions :wink:

Thanks for the feedback!

Hi,
I have a comment about removing snapshots. In the example below I understand perfectly why snapshot c74a4169 was removed, but since it was the oldest snapshot and the last monthly snapshot has not been taken yet, I think it would have been better to keep it, don't you think?

Applying Policy: keep the last 7 daily, 5 weekly, 3 monthly snapshots
snapshots for (host [ct17102601], paths [/]):

keep 10 snapshots:
ID        Date                 Host        Tags        Directory
----------------------------------------------------------------------
058b749f  2018-09-23 19:15:02  ct17102601              /
d2ac5901  2018-09-30 19:00:01  ct17102601              /
9911ee71  2018-10-07 19:29:01  ct17102601              /
89cfce01  2018-10-09 19:00:02  ct17102601              /
0c230b68  2018-10-10 19:02:01  ct17102601              /
06136e65  2018-10-11 19:09:01  ct17102601              /
0d5b3064  2018-10-12 19:02:01  ct17102601              /
3c4f425f  2018-10-13 19:21:02  ct17102601              /
a05fad00  2018-10-14 19:08:02  ct17102601              /
525c23be  2018-10-15 19:25:01  ct17102601              /
----------------------------------------------------------------------
10 snapshots

remove 2 snapshots:
ID        Date                 Host        Tags        Directory
----------------------------------------------------------------------
c74a4169  2018-09-16 19:06:02  ct17102601              /
98aff015  2018-10-08 19:22:01  ct17102601              /
----------------------------------------------------------------------
2 snapshots

2 snapshots have been removed, running prune
counting files in repo
building new index for repo
[11:25] 100.00%  10895 / 10895 packs

repository contains 10895 packs (776932 blobs) with 52.474 GiB
processed 776932 blobs: 0 duplicate blobs, 0B duplicate
load all snapshots
find data that is still in use for 10 snapshots
[4:11] 100.00%  10 / 10 snapshots

found 769421 of 776932 data blobs still in use, removing 7511 blobs
will remove 0 invalid files
will delete 113 packs and rewrite 876 packs, this frees 1.650 GiB
[9:48] 100.00%  876 / 876 packs rewritten

counting files in repo
[1:39] 100.00%  10626 / 10626 packs

finding old index files
saved new indexes as [a11fdf20 0d46097f 4a405a59 590b0f8f]
remove 9 old index files
[0:27] 100.00%  989 / 989 packs deleted

done

Regards