Bartłomiej pointed out that we somehow stopped using the mailing list
for discussion, so here we go.
Currently we only create backups on vostok using borg. A potentially
serious problem with this is that an attacker who gains access to a
server also has sufficient access to remove all backups of that server.
We could restrict that in borg, but then borg would no longer be able to
prune old backups regularly and our repos would grow forever. A better
solution is to create backups of the backups in a way that the front-end
servers cannot delete any of these secondary backups.
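For reference, the in-borg restriction mentioned above would likely be
borg's append-only mode, enforced via an SSH forced command on vostok.
A minimal sketch (repo path, user, and key are placeholders):

```shell
# ~backup/.ssh/authorized_keys on vostok (sketch; paths/key are placeholders).
# The client can only append to its own repo, never delete or prune:
command="borg serve --append-only --restrict-to-path /backup/repos/server1",restrict ssh-ed25519 AAAA... root@server1
```

The catch, as noted above, is exactly that with this in place nobody can
prune, so the repo only ever grows.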
Possible solutions include:
- Create a second layer of backups (backups of the backups) on vostok.
  This roughly doubles our space requirement, and we are currently at
  44% usage, so it won't work for long. Unless we can use file-system-
  level snapshots or something similar to reduce the required space,
  it's out.
- Put the secondary backups on a different, possibly new, machine using
  borg. The secondary backup would be created on vostok from the
  existing backup.
- Put them on AWS Glacier. Roughly 4 € per TB per month; suggested by
  Tyler from Arch Linux 32.
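On the first option: if the repos lived on a copy-on-write file system,
the snapshot variant could be cheap space-wise. A sketch assuming btrfs
(hypothetical paths; ZFS would work analogously):

```shell
# Take a read-only, copy-on-write snapshot of the backup subvolume on vostok.
# Only blocks changed after the snapshot consume extra space:
btrfs subvolume snapshot -r /backup/repos /backup/snapshots/repos-$(date +%F)
```

Note this still doesn't help if the attacker gets root on vostok itself;
it only reduces the space cost of the second layer.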
Using Glacier would require that we export tarballs (supported by borg)
and then upload them. However, since the backups are encrypted and
vostok is not supposed to be able to read them, the tarballs would need
to be created on and uploaded from the servers themselves. This may
become a CPU/bandwidth/traffic concern if done often. Tyler is currently
investigating this for Arch Linux 32's backups, AFAIK.
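The export-and-upload step on each front-end server could look roughly
like this (borg's export-tar is real; the repo URL, archive name, and
Glacier vault are placeholders):

```shell
# Runs on the front-end server, which holds the borg key, so the data
# is decrypted here rather than on vostok:
borg export-tar ssh://backup@vostok/backup/repos/server1::2024-01-01 server1.tar.gz

# Upload the tarball to a Glacier vault (vault name is a placeholder;
# "-" means "use the account of the configured credentials"):
aws glacier upload-archive --account-id - --vault-name server1-backups --body server1.tar.gz
```

Each export walks and decrypts the whole archive, which is where the
CPU/bandwidth concern above comes from.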
Does anyone have other ideas for what we could do here to ensure that we
have backups of the backups? The most important requirements are that,
no matter which server an attacker manages to get access to, they cannot
read any user data from other servers, and they cannot remove all
backups using access to that server alone.