1unraid_user

Everything posted by 1unraid_user

  1. I can confirm this. It is indeed a huge pain, as I run my UniFi Controller in a Docker container and need other containers to communicate with it.
  2. Some months ago my flash drive got corrupted, and I have wondered ever since what the best strategy is. I use the unraid.net flash backup as well as a versioned local backup, but this doesn't seem to be a completely valid backup strategy. When my flash drive got corrupted, my backup started backing up the corrupted files, which made the unraid.net backup worthless. I only found out by accident that the drive was corrupted and that something was wrong, when the server didn't reboot. The first corrupted files were months old at that point, and I no longer had a backup that old. I was able to put together a valid flash drive by manually identifying the corrupted files and replacing them with the original versions. It was mostly luck that no "individualized" files were corrupted. I wonder how this could be avoided. To be fully sure that corrupted files are never backed up, I would need a weekly job that searches for corrupted files; only if nothing is found should a backup be made, and if corrupted files are found, the last "clean" backup should be used to create a new flash drive (a rough sketch of such a check follows after this list). I have not read about these issues anywhere so far. Is this problem common?
  3. Hi @JoeUnraidUser, I am still using your script as part of my backup routine, but by now I end up with a lot of backups. Do you have a way to delete the oldest files as soon as a certain number of backups is reached (e.g. only keep 10 backups per Docker container and then delete the oldest)? One possible pruning approach is sketched after this list.
  4. I got the Docker container to run, however I am failing at passing through the RF-USB-2. In case you have problems, check the attached screenshot of my Docker configuration. Regarding the RF-USB-2, it's possible that, unlike with a VM, the driver actually has to be enabled at host level. It's weird, as I can pass the device through to the VM without a problem, but it may be that the device is not initialised completely at host level, which then becomes a problem inside the Docker container (a generic pass-through sketch follows after this list). In case anyone else is trying this, let me know.
  5. Hi everyone, has anyone figured out a solution in the meantime? I am still looking into this.
  6. So far I have only been able to solve this by setting up a VM. It's not the worst solution, but a Docker container would definitely be sexier, slimmer, and easier to back up.
  7. Since v3.55.10, RaspberryMatic also offers installation via Docker. I saw some people setting it up as a VM, but Docker would be my preferred way, as it's much lighter. Has anyone made it run on Unraid? This GitHub wiki explains it a little, however I'm a bit afraid of just running these commands on my production Unraid server. Is it safe to use? Does anyone maybe already have it running in Docker (a hedged guess at what such a run command might look like follows after this list)? Unfortunately I have zero experience in setting up Docker containers without just clicking "install" in Community Applications 😅
  8. Did anyone manage to run it without the array spinning up? I use a cached share as the backup destination as well as the Dynamix Folder Cache plugin. However, my array still spins up every time I start a backup job. By the way: is it on purpose that the cache folder for the plugin is not being deleted after finishing? It looks to me like "appdata" in the backup could be deleted afterwards, as we have the .tgz.
  9. Fixed it by running chmod u+x backupDockers.sh. I had tried running the command with "sudo" before, because I already assumed it might be a permissions problem and thought I could work around it with sudo. It turns out, however, that sudo was part of the problem. I don't understand why, but it's running now.
  10. Hi @JoeUnraidUser, I just tried your script (which is exactly what I am looking for), but the system gives me this error message:

          backupDockers.sh: line 52: syntax error near unexpected token `<'
          backupDockers.sh: line 52: ` readarray -t all < <(printf '%s\n' "$(docker ps -a --format "{{.Names}}" | sort -fV)")'

      I tried inserting it as text and then downloading the provided file, but without success. As this post is ranked #1 on Google for Unraid Docker backups, I think fixing this would probably be a big gain for the community. (Maybe I am the problem.) A note on what usually causes this error follows after this list.
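
A rough sketch of the pre-backup corruption check described in post 2, assuming the Unraid flash drive is mounted at /boot and that a checksum manifest can be kept off the flash drive (both paths, and the backup command itself, are placeholders, not something from the original post):

    #!/bin/bash
    # Sketch: only back up the flash drive if every tracked file still
    # matches its recorded checksum. Paths below are assumptions.
    FLASH=/boot
    MANIFEST=/mnt/user/backups/flash.md5

    if [ ! -f "$MANIFEST" ]; then
        # First run: record checksums for everything on the flash drive.
        (cd "$FLASH" && find . -type f -exec md5sum {} +) > "$MANIFEST"
        echo "Manifest created, run the job again to start checking."
        exit 0
    fi

    # md5sum -c re-checks every recorded file; any mismatch is treated
    # as possible corruption and the backup is skipped.
    if (cd "$FLASH" && md5sum -c --quiet "$MANIFEST"); then
        echo "Flash drive looks clean, running backup."
        # <your backup command here>   (placeholder)
    else
        echo "Checksum mismatch found, keeping the last clean backup instead." >&2
        exit 1
    fi

One caveat: files changed on purpose (config edits, Unraid upgrades) would also show up as mismatches, so the manifest would have to be refreshed after every intentional change.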
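
On the retention question in post 3: a minimal pruning sketch, assuming the backups land in one directory and are named <container>.<date>.tar.gz (the directory and naming scheme are assumptions about the script's output, not confirmed in the thread):

    #!/bin/bash
    # Sketch: keep only the newest $KEEP archives per container.
    BACKUP_DIR=/mnt/user/backups/docker   # assumption
    KEEP=10

    for name in $(docker ps -a --format '{{.Names}}'); do
        # List this container's archives newest first, skip the first
        # $KEEP, and delete whatever is left.
        ls -1t "$BACKUP_DIR/$name".*.tar.gz 2>/dev/null \
            | tail -n +$((KEEP + 1)) \
            | xargs -r rm --
    done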
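
On the RF-USB-2 pass-through from post 4: a generic sketch of handing a USB serial adapter to a container via Docker's --device flag. The device node /dev/ttyUSB0 and the image name are placeholders, and on Unraid the --device argument would typically go into the template's Extra Parameters field:

    # First check that the host created a device node at all; if nothing
    # shows up here, the host kernel is missing the driver and no
    # container flag will help.
    ls -l /dev/ttyUSB* /dev/serial/by-id/ 2>/dev/null

    # Generic pass-through example ('some/image' is a placeholder).
    docker run -d --name example \
        --device=/dev/ttyUSB0:/dev/ttyUSB0 \
        some/image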
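
On the RaspberryMatic question in post 7: the sketch below is only a guess at what a manual run could look like, not taken from the project's wiki; the image name, port mapping, volume path and the need for --privileged are all assumptions and should be verified against the RaspberryMatic documentation before touching a production server:

    # Hypothetical sketch only -- verify every value against the
    # RaspberryMatic GitHub wiki before using it.
    docker run -d --name raspberrymatic \
        --privileged \
        -p 8080:80 \
        -v /mnt/user/appdata/raspberrymatic:/usr/local \
        ghcr.io/jens-maus/raspberrymatic:latest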
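
On the syntax error in post 10: readarray and the < <(...) process substitution are Bash features, so this error usually appears when the script is not actually interpreted by bash (for example when it is fed to a shell running in plain POSIX/sh mode instead of being executed through its #!/bin/bash shebang). A small illustration, with backupDockers.sh standing in for the script from the thread:

    # May fail with "syntax error near unexpected token `<'" when the
    # script ends up being parsed by a POSIX shell instead of bash.
    sh backupDockers.sh

    # Running it explicitly with bash avoids that ...
    bash backupDockers.sh

    # ... as does making it executable so the #!/bin/bash shebang is
    # honoured, which is also what post 9 ended up doing.
    chmod u+x backupDockers.sh
    ./backupDockers.sh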