scot

Members · 9 posts

  1. After a ton of attempts and failures, I just wanted to pass along some basic advice. TL;DR: use a SLOG with ZFS to make Time Machine work; it may have some other Mac-related benefits as well.

     Issue: Time Machine over SMB writing to a ZFS-based storage system works at first, but backups take longer and longer as time goes on. Eventually it gets so slow that a failure occurs. My guess: if you let it continue, it will start a new backup while a previous backup is still running, corrupting the backup. Your only option at that point is to delete it and start over. This only happens once you have a long history backed up, maybe a few months with a lot of changing data, and a large space dedicated (5TB in my case; failures start at around 2TB of used space).

     Initial solution: I tried every setting under the sun and eventually switched to mortizf's Time Machine Unraid app. It has its own dedicated Samba instance and worked initially (as in, every time I restarted my backups), but eventually failed. Increasing system speed and adding 2.5GbE ports had no effect.

     Then a thought occurred to me: Time Machine over SMB forces synchronous writes. That matters for ZFS. If the issue is the system binding up, would this be the one place a SLOG device would actually help? I popped in a SLOG device, did it wrong (via the command line, causing Unraid heartburn), but ZFS worked and accepted the SLOG as its own; see another post I made about fixing Unraid's view of the SLOG device. Backups went from ~11 hours per single daily update down to about 20-30 minutes, or less if doing hourly. This is just due to how ZFS handles sync writes and inefficiencies in the overall system, SMB included, but it no longer gets hung up on the millions of small write-and-commit requests. (A sketch of the command-line steps is just below this post.)

     I can't say for sure this will solve all of your issues, and it probably won't make a huge difference otherwise, but in my case *everything* is faster from the Mac, including directory listing. I have no idea why a directory listing would involve a synchronous write, but I can't argue with the difference. In my case I am using a 6-drive array (4+2 raidz2), 20TB per disk, with a single NVMe giving 128GB of SLOG. Eventually I will change that to a mirrored pair of only 64GB each. I make no claims about XFS, other file systems, the FUSE-style file system, or anything else; this is purely a ZFS-based file system shared directly.
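     If you want to try the same thing, here is a minimal sketch of the command-line side, assuming a pool named "zfs" (as mine is), a Time Machine dataset named zfs/timemachine, and placeholder /dev/disk/by-id paths; adjust names to your own setup:

        # Time Machine over SMB issues synchronous writes; "standard" or
        # "always" here means a dedicated log device will actually be used.
        zfs get sync zfs/timemachine

        # Add a single NVMe as a dedicated log (SLOG) vdev, referenced by id so
        # the assignment survives reboots and device reordering.
        zpool add zfs log /dev/disk/by-id/nvme-EXAMPLE_SSD_SERIAL

        # Or add a mirrored SLOG pair in one step instead.
        zpool add zfs log mirror /dev/disk/by-id/nvme-EXAMPLE_A /dev/disk/by-id/nvme-EXAMPLE_B

        # Confirm the log vdev shows up and is taking writes.
        zpool status zfs
        zpool iostat -v zfs 5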
  2. When you say unassign, I assume you mean: go to the pool devices on the "Main" tab, select each drive one by one under identification, and choose "no device", repeating for each one. When that is done, go to the bottom and hit Start under Array Operation. I attempted to do this, but Start remains greyed out, stating "missing cache disk"; it does offer a checkbox to force proceeding: "Start will remove the missing cache disk and then bring the array on-line. Yes, I want to do this." Just double-checking, since the terminology you are using doesn't precisely match the UI. Apologies.
  3. I believe I went over the drive limit with the addition of the SLOG, due to the fact that the "array" has to have a single drive in the system (a USB stick) that I use for nothing else. So rather than try to work around that, I upgraded the license. I am pretty sure I can start the array, but your instruction made it seem like I should add the SLOG to the pool device list. I added it, but it gave me an error about wanting to expand the array, which I certainly don't want to do, as it was working and the UI makes me believe it is trying to expand the space in the pool rather than just recognize that there is a SLOG. I just want it as a SLOG, which has an INCREDIBLE effect on speeding up Time Machine backups as they get larger (roughly a 90% drop in time to back up). tower-diagnostics-20240118-1050.zip
  4.    pool: zfs
          id: 373615592619169231
       state: ONLINE
      action: The pool can be imported using its name or numeric identifier.
      config:

          zfs                                               ONLINE
            raidz2-0                                        ONLINE
              sdg1                                          ONLINE
              sdh1                                          ONLINE
              ata-ST20000NM007D-3DJ103_ZVT8NRQ5             ONLINE
              sdc1                                          ONLINE
              sdd1                                          ONLINE
          logs
            ata-Samsung_SSD_850_EVO_250GB_S2R5NX0J542745K   ONLINE

     tower-diagnostics-20240118-0845.zip
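     (For what it's worth, the command that "action" line is pointing at looks like this as far as I understand it, using the pool name or numeric id from the output above; the -d option just tells ZFS where to look for the devices:)

        # Import the pool by name, or by its numeric identifier, scanning the
        # stable by-id paths so devices get recorded by serial rather than sdX.
        zpool import -d /dev/disk/by-id zfs
        # or
        zpool import -d /dev/disk/by-id 373615592619169231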
  5. Unraid is currently not able to start my array (a "too many drives" error), though I've rebooted a few times since the drive replacement and the addition of a SLOG. It also doesn't show the replaced drive in the pool of devices. I have 5 drives in my array and 1 SLOG, plus a boot USB. I replaced the drive via the standard CLI, so maybe I screwed it up and Unraid thinks I have 6 in the array? I need to figure out how to get Unraid to clean up and see the ZFS pools as they exist. Any ideas on what I should do next?
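     (For context, a ZFS-level replacement of the kind I mean typically looks something like this; the pool name "zfs" is from my setup and the device names are placeholders. The pool resilvers and is happy, but Unraid's own device assignments never learn about the swap:)

        # Swap the failed disk for the new one inside ZFS itself.
        zpool replace zfs ata-OLD_DISK_SERIAL /dev/disk/by-id/ata-NEW_DISK_SERIAL

        # Watch the resilver run to completion.
        zpool status zfs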
  6. Others have confirmed #2 in other posts. It is a real issue after moving the images to ZFS. Before manually moving those images it all works fine, but you end up with the issues seen in #1. As for #1, it is not clearly spelled out and the UI does not help you diagnose it. I understand that something may be in the release notes, but for a commercial product, UX is absolutely key: everything must do what it appears to be doing, which is not true in this case. There are a ton of "my UI is locked" posts with similar issues and confusion about how information is stored and about moving things around. As a new user it took me quite a while to figure out what was going on; the forum posts pointed at various other problems, and I did read the notes, but I couldn't pick up on the answer until digging in. I do understand the basic response of "but ZFS is for experts..." Fine, then lock it behind an expert area. If it is being used as an enticement from a marketing point of view, it must be supported and logical for a released product. This particular area needs enhancement to stave off confusion, and that could be done with a very simple series of UI changes or notifications and documentation.
  7. A few issues that are fairly critical. Technical users may be able to work around them, but realistically these need solutions for the regular Unraid person experimenting with ZFS.

     First: when creating a 6.12-RC5 system that uses a ZFS raid as primary storage, it is suggested to use a second USB key as a starting point to get the array running. This does work, but what is not clear to the user is that this key will be the storage location for Docker/VM images, even if you go in and select the ZFS drive as the location for appdata etc. after the Unraid system starts. It does not automatically move the information from the USB key to ZFS, and newly downloaded images still seem to be created on the USB key, which is invariably going to be extremely slow. This results in the UI becoming completely unresponsive during installs of Docker images, starting and stopping of VMs, and more, without clear, easy-to-see reasons why. Easy solutions should be available, or at least documented, to tell the user how to officially get everything onto ZFS. As per other posts this will probably be a non-issue in 6.13, but we are not there yet, and IMO this should be dealt with now rather than waiting.

     Second: using ZFS as the store for Docker images/containers breaks shutdown/reboot/stopping of the array. The cause is simple: if you stop the array, the Dockers will not be shut down until the ZFS subsystem is turned off, but ZFS is busy because of those same Dockers, so it cannot be stopped. The order of operations needs to be corrected to stop the Dockers and VMs first, then the ZFS system.

     Third: ZFS is being handed logical drive names as part of its assignments. It is MUCH safer to assign drives by-id (a device hardware GUID rather than /dev/sda), so that if you remove a drive and put it back it ALWAYS gets the correct information, and failures and the like are easier to deal with (a sketch of switching an existing pool over follows this post). Compare:

        sdf1                               ONLINE       0     0     0
        sdg1                               ONLINE       0     0     0

     vs:

        ata-Hitachi_HDS722020ALA330_[..]   ONLINE       0     0     0
        ata-Hitachi_HDS722020ALA330_[..]   ONLINE       0     0     0
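     (The by-id switch can be made on an existing pool without rebuilding it; a rough sketch, assuming the pool is named "zfs" and nothing is using it while you do this:)

        # Export the pool, then re-import it while pointing ZFS only at the
        # stable by-id paths; the recorded device paths are updated to match.
        zpool export zfs
        zpool import -d /dev/disk/by-id zfs
        zpool status zfs   # devices now listed as ata-/nvme- ids, not sdX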
  8. Same issue here. If all of the dockers are stopped, it works properly.
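     (A quick way to do that from the console before stopping the array, assuming you're fine with every container going down at once:)

        # Stop every running container so nothing keeps the ZFS datasets busy,
        # then stop the array from the UI as usual.
        docker stop $(docker ps -q)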