whitephoenix117

Members
  • Posts: 22
  • Joined
  • Last visited

whitephoenix117's Achievements

Noob (1/14)

Reputation: 1

  1. I have all Dockers disabled on purpose. Any chance of disabling this error, or at least making it configurable? It ends up emailing me every time it runs.
  2. So I have an interesting diagnostic update. Home Assistant is still collecting data from the APC UPS daemon, but the card on Unraid is broken. Meaning, Unraid is still communicating with the UPS.
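     A hedged way to confirm that from the Unraid console (assuming the stock apcupsd-based UPS support):
        # query the running apcupsd daemon directly, bypassing the dashboard card
        apcaccess status
        # if this prints STATUS, LINEV, BCHARGE, etc., the daemon is healthy and only the card is broken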
  3. Restarting restored the proper functionality 🤷‍♂️ But after a day or so it stopped again; I was able to repeat this behavior 2x.
  4. I also lost mine in the dashboard recently, but it's showing as a USB device.
  5. I'm not sure if this is the correct place for a feature request. I would like to use this plugin to back up only my flash drive. It would be nice to have a global option to skip/disable docker & appdata. Current workaround:
       • point appdata to an empty directory
       • turn each docker to "no"
     My only concern with this is that every time I install a new docker I need to remember to disable its backup.
  6. Due to the lack of encryption I would prefer not to use Unraid Connect. I had tried using the appdata backup plugin, but I couldn't find an option to leave all my docker containers running. I don't want to restart them every day, week, etc., and I'm already backing up my appdata separately.
  7. I'm trying to make the flash drive readable by a backup app that is not running as root. What am I missing?
        root@Tower:~# ls -la /boot/bzroot
        -rw------- 1 root root 30157108 Dec 1 09:49 /boot/bzroot
        root@Tower:~# chmod o+r /boot/bzroot
        root@Tower:~# ls -la /boot/bzroot
        -rw------- 1 root root 30157108 Dec 1 09:49 /boot/bzroot
        root@Tower:~#
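     A hedged observation rather than a confirmed diagnosis: Unraid's /boot is the FAT32 flash drive, and FAT doesn't store POSIX permission bits, so chmod has no lasting effect there; the effective permissions come from the mount options. One way to check:
        # show how /boot is mounted; on vfat the umask/fmask/dmask options dictate the permission bits
        mount | grep /boot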
  8. Also having the same problem
        NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                            DIO LOG-SEC
        /dev/loop1         0      0         1  1 /boot/bzfirmware                       0     512
        /dev/loop2         0      0         1  0 /mnt/cache/system/docker/docker.img    0     512
        /dev/loop0         0      0         1  1 /boot/bzmodules                        0     512
  9. Thanks! Sure enough, enabling reconstruct write did improve the speed. The parity delay mentioned is not unique to ZFS, but I had never noticed it before, so then:
       • When I looked at write speeds in the past, all the disks happened to already be spun up
       • The effect is somehow worse with ZFS?
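     If it helps to verify the spin-up theory, a hedged console check (it simply loops over whatever /dev/sd? devices exist):
        # report whether each drive is active/idle or in standby (spun down)
        for d in /dev/sd?; do echo -n "$d: "; hdparm -C "$d" | grep state; done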
  10. I added a new ZFS drive to my pool and started to move data over; it's quite slow. The drive should be between 150-250 MB/s but it's been hovering at +/- 50. There seems to be something bottlenecking 1 or 2 single cores; the screenshot below shows just 1 but sometimes it's 2.
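     For what it's worth, a hedged way to watch where the throughput goes while the move runs (the pool name is a placeholder):
        # per-vdev bandwidth and IOPS, refreshed every 5 seconds
        zpool iostat -v <poolname> 5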
  11. It can, but using Syncthing saves me from WireGuard, DDNS, port forwarding, etc. on the remote client. It's also very simple to add multiple remote clients. I think the compromise here is to enable the trash can feature for Syncthing; I really only want to manage versioning in 1 place and, from what I can tell, Duplicacy is far superior to Syncthing in this regard. Enabling the "trash" will at least protect me from accidental deletion. It's not in my previous post, but I also have Nextcloud running inside the share, so I will have many layers of versioning:
        • Nextcloud versions / trash can
        • Duplicacy replicates of the above
        • Syncthing "trash can"
  12. RAID is not a backup. Sync is not a backup. But what if I use both of them together? The question is: does this approach qualify as a 3-2-1 backup?
        Proposed Solution
        Unraid Array, single parity
          • Share 1 (Critical Primary): Disk1, Disk2, etc.
          • Share 2 (Critical Backup): Disk3, Disk4, etc.
          • Share 3 (Non-Critical): All disks
        Implementation
          • All data, applications, etc. would write to Share 1.
          • Duplicacy would then be utilized to create versioned backups of the data from Primary --> Backup (see the sketch after this post).
          • Syncthing would be used to mirror Share 2 to an offsite location.
          • Shares 1 & 2 would be protected by the Dynamix File Integrity plugin.
        I think this would give me 3-2-1:
          • 3 copies: Share 1, Share 2, offsite
          • 2 physical disks: disk 1/2 + disk 3/4
          • 1 geo-redundant: offsite
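     A minimal sketch of the Duplicacy side, assuming the CLI rather than the Web UI; the share paths and the snapshot/storage names are placeholders:
        # initialize Share 1 as a Duplicacy repository whose storage lives on Share 2
        cd /mnt/user/Share1
        duplicacy init critical-primary /mnt/user/Share2/duplicacy-storage
        # take a versioned snapshot; re-running this later adds new revisions
        duplicacy backup -stats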
  13. Just came across this in my search for a similar solution. While I haven't tried it, I think running 2 Syncthing instances, both pointing to different shares, would work for both of our use cases (a rough sketch follows this post). @avp2306 FYI Unraid array drives are readable outside the array.
        Goals
          • Maintain Unraid array flexibility compared w/ ZFS (add/remove drives, different sizes, etc.)
          • Protect against drive loss
          • Protect against bit-rot
        Proposed Solution
        Shares 1 & 2 should be protected by the Dynamix File Integrity plugin.
        Array
          • Share 1 (Critical Primary): Disk1, Disk2, etc.
          • Share 2 (Critical Backup): Disk3, Disk4, etc.
          • Share 3 (Non-Critical): All disks
        All data, applications, etc. would write to Share 1. Syncthing would then be utilized to mirror data from Primary --> Backup. This would also allow for file versioning utilizing Syncthing's own service. This would likely be acceptable for my use case, as my "critical" data is maybe 10% of my total storage, so disks 3 + 4 in the backup would also be utilized to store other data in the array. Would love some feedback from the Unraid experts!
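     A hedged sketch of the two-instance idea, assuming the linuxserver.io Syncthing image; the container names, host ports, and share paths are placeholders:
        # first instance watches Share 1
        docker run -d --name=syncthing-share1 \
          -p 8384:8384 -p 22000:22000 \
          -v /mnt/user/appdata/syncthing-share1:/config \
          -v /mnt/user/Share1:/data \
          lscr.io/linuxserver/syncthing
        # second instance watches Share 2, on shifted host ports to avoid conflicts
        docker run -d --name=syncthing-share2 \
          -p 8385:8384 -p 22001:22000 \
          -v /mnt/user/appdata/syncthing-share2:/config \
          -v /mnt/user/Share2:/data \
          lscr.io/linuxserver/syncthing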
  14. This! Can confirm I had the same issue, also in Firefox. If you click Cancel, everything is good; Resend causes the bug. Unraid 6.11.5
  15. I bought 5 refurbished ST16000NM001G drives and I'm pre-clearing them all before adopting them. One of them is finishing the final post-zero read extremely slowly. Wondering if I should trust this drive at all.
        Suspect drive: [screenshot]
        Successful drive: [screenshot]
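     A hedged follow-up check before trusting or returning the drive (the device path is a placeholder):
        # review reallocated/pending sector counts and the SMART error log
        smartctl -a /dev/sdX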