Ask HN: What is the current state of the art in BIG (>5TB) cloud backups?
I'm talking about greater than 5 TB in size. Rclone looks really good because I can just give it a bandwidth limit, point it at Google Drive, and fire and forget. But I'm curious if that is the best way to do this? What does HN think?
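For reference, a fire-and-forget rclone run along those lines might look like this (the remote name `gdrive` and the source path are placeholders; the remote would come from your own `rclone config`):

```shell
# One-way sync to a Google Drive remote named "gdrive".
# --bwlimit caps bandwidth so the link stays usable; --transfers limits parallel uploads.
rclone sync /srv/backup gdrive:backups \
  --bwlimit 8M \
  --transfers 4 \
  --progress \
  --log-file rclone-backup.log
```

Adding `--dry-run` on the first pass shows what would be copied or deleted before committing to a multi-day transfer.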
> Their 1TB lifetime plan is very competitive though.
I know rsync.net has been around for a long time but I'm always very skeptical of lifetime plans, you'd have to use it ~10 years for it to be worth it. (Comparing with Hetzner's 1TB storage box)
If it’s important, just use B2 or a Hetzner storage box. Use restic or rustic for backup, dedupe, and encryption. I run this setup for home and work, and we’re doing this on 10 TB+.
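A minimal sketch of that restic-to-B2 setup (bucket name, paths, and retention policy are assumptions; restic speaks to Backblaze B2 natively via its `b2:` repository syntax):

```shell
# Credentials for Backblaze B2 plus the passphrase that encrypts the repo.
export B2_ACCOUNT_ID="<key-id>"          # placeholder: your B2 key ID
export B2_ACCOUNT_KEY="<application-key>" # placeholder: your B2 application key
export RESTIC_PASSWORD="<passphrase>"     # placeholder: repo encryption passphrase

# One-time: create the encrypted, deduplicated repository.
restic -r b2:my-backup-bucket:restic init

# Recurring: back up, then expire old snapshots and reclaim space.
restic -r b2:my-backup-bucket:restic backup /data
restic -r b2:my-backup-bucket:restic forget \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune
```

Because restic deduplicates at the chunk level, repeated runs only upload changed data, which matters a lot at the 10 TB scale.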
How quickly do you need to be able to restore? Is it commercial or homelab?
The most cost-effective option by far would be to put a NAS device someplace offsite. You could use tailscale to connect to it remotely.
After that, depending on your access patterns, either a glacier-style s3 service (aws or backblaze/etc), or a rented bare-metal server with big disks some place inexpensive.
I was actually talking to my dad the other day. He asked me if there is a way for him to replicate his hard drive to me without touching cloud providers. The contents are family photos & videos, plus paperwork. I couldn't find a simple solution.
Syncthing will do a 1-to-1 connection if possible, otherwise it will fall back to a relay server. Traffic is encrypted. Open source. After the initial setup of pairing the devices together, it's just a matter of starting the application. Pretty much what you want?
I need to know what people are doing recently because most of the documentation I'm finding online is from 3-5 years ago and I want the most up-to-date information.
People are doing pretty much exactly what they were doing 3-5 years ago; the software has been pretty good for a while now, so there hasn't been much change.
It's just bigger than any amount of data I've ever dealt with in my life (I'm 30). It's also hard to find solutions that don't run into bandwidth caps, and the client side ends up running for a very long time: days or weeks.
You can probably get away with google drive+rclone+borg/restic/whatever, but it will be rather clunky. Backblaze might be a nicer backend to use.
I use rsync.net with borg, but not sure about your budget. Their 1TB lifetime plan is very competitive though.
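For anyone trying that combination, a borg-over-SSH sketch (the username, host, and repo path are placeholders; rsync.net may also need borg's `--remote-path` flag to select the borg binary on their side):

```shell
# One-time: create an encrypted repository on the remote host.
borg init --encryption=repokey-blake2 user@usw-s001.rsync.net:backups

# Recurring: create a snapshot; borg deduplicates and compresses before upload.
borg create --compression zstd --stats \
  user@usw-s001.rsync.net:backups::'{hostname}-{now:%Y-%m-%d}' \
  /home /etc

# Thin out old archives on a rolling schedule.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
  user@usw-s001.rsync.net:backups
```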
Or to put it another way, why is state of the art important?
I didn’t use rclone. I just used native AWS CLI commands. But I’m an AWS guy and already had my own seldom-used AWS account.
Restore takes from 12 hours (more expensive) to 48 hours (cheaper).
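Assuming those tiers map to S3 Glacier Deep Archive (Standard retrieval within roughly 12 hours, Bulk within 48), the plain AWS CLI version might look like this (bucket and key names are placeholders):

```shell
# Upload straight into the cheapest storage class.
aws s3 sync /data s3://my-backup-bucket/ --storage-class DEEP_ARCHIVE

# Restores are asynchronous: request a temporary readable copy first.
aws s3api restore-object \
  --bucket my-backup-bucket \
  --key data/photos.tar \
  --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'

# Poll until the Restore header reports ongoing-request="false", then download.
aws s3api head-object --bucket my-backup-bucket --key data/photos.tar
```

Note that Deep Archive only offers the Standard and Bulk retrieval tiers, which matches the 12- and 48-hour figures above.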