can4d Posted February 9, 2023 (Author)

2 minutes ago, trurl said: Looks like disk2 has a little more appdata than disk1, probably because docker is enabled. Might be a good idea to wait on getting parity assigned until we can get appdata off those disks and onto a pool, where it should have been to begin with. domains and system should be on a pool too. Still need to see the Compute... for those last 3 shares, especially system.
trurl Posted February 9, 2023

Just now, trurl said: Looks like disk2 has a little more appdata than disk1, probably because docker is enabled.

Your Docker screenshot only had one container started. Is that the only one that was started, or did you stop some of the others?
can4d Posted February 9, 2023 (Author)

1 minute ago, trurl said: Your Docker screenshot only had one started. Is that the only one that was started? Or did you stop some of the others?

I stopped Plex because I didn't want it writing anything.
trurl Posted February 9, 2023

Now that we have seen the Docker page, disable Docker and VM Manager in Settings. Looks like appdata is the only thing a little off as far as mirroring. What do you get from the command line with this?

ls -lah /mnt/disk1/appdata

and this?

ls -lah /mnt/disk2/appdata

and this?

ls -lah /mnt/disk3/appdata
can4d Posted February 9, 2023 (Author, edited February 10, 2023)

21 hours ago, trurl said: Now that we have seen Docker page, disable Docker and VM Manager in Settings. Looks like appdata is the only thing a little off as far as mirroring. What do you get from command line with this? ls -lah /mnt/disk1/appdata and this? ls -lah /mnt/disk2/appdata and this? ls -lah /mnt/disk3/appdata
trurl Posted February 9, 2023

What do you get from the command line with this?

du -h -d 1 /mnt/disk1/appdata

and this?

du -h -d 1 /mnt/disk2/appdata
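The two `du` runs above can be wrapped in one small helper so the per-folder totals land side by side (a sketch; the function name is mine, and the /mnt/diskN paths assume the standard Unraid layout):

```shell
# appdata_usage: print top-level folder sizes for each given appdata
# path, so the supposed mirrors can be compared at a glance.
appdata_usage() {
    for dir in "$@"; do
        echo "== $dir =="
        # -h: human-readable sizes; -d 1: only one level of subfolders
        du -h -d 1 "$dir" 2>/dev/null | sort -k2
    done
}

appdata_usage /mnt/disk1/appdata /mnt/disk2/appdata
```

Sorting on the path column keeps the same subfolder on the same relative line in both listings, which makes a mismatch easy to spot.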
trurl Posted February 9, 2023

Referring to your Docker page, it looks like clamav's appdata was on a pool named cache_ssd, so your 2x500 pool would have been named cache_ssd. binhex-sonarr had a mapping to /mnt/disks/Random_Disk_II. Was that the NVMe as an Unassigned Device?
can4d Posted February 9, 2023 (Author, edited February 10, 2023)

21 hours ago, trurl said: What do you get from command line with this? du -h -d 1 /mnt/disk1/appdata and this? du -h -d 1 /mnt/disk2/appdata
can4d Posted February 9, 2023 (Author, edited February 9, 2023)

4 minutes ago, trurl said: Referring to your Docker page, it looks like clamav appdata was on a pool named cache_ssd. So your 2x500 pool would have been named cache_ssd. binhex-sonarr had a mapping to /mnt/disks/Random_Disk_II Was that the nvme as an Unassigned Device?

No, Random_Disk_II was an external USB drive (unassigned). Yes, I remember the cache_ssd name (I wasn't really configuring with purpose).
trurl Posted February 9, 2023

The only user share on the NVMe was isos, and isos was also on disk1 (and its mirror). No idea what that NVMe pool was named unless it was referred to directly by one of your VMs. Do you know?

I'll have some recommendations on how to better use the pools, but we'll just concentrate on getting things going as they were first. The latest User Shares Compute screenshot shows disk1 and disk2 appdata equal, and that agrees with the du results. Not sure why that earlier screenshot had slightly less on disk1.

5 minutes ago, can4d said: No the Random_II was an external usb (unassigned)

I see it now, 5TB.
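When the `du` totals agree but you want certainty that the two copies really match file for file, a recursive compare reports any difference (a sketch; the helper name is mine, the /mnt/diskN paths assume the standard Unraid layout, and it should only be run while nothing is writing to appdata):

```shell
# verify_mirror: recursively compare two directory trees; prints one
# line per differing or missing file and stays silent when they match.
verify_mirror() {
    diff -rq "$1" "$2"
}

# Usage on the two array copies of appdata (guarded so it is a no-op
# if the path does not exist on this machine):
[ -d /mnt/disk1/appdata ] && verify_mirror /mnt/disk1/appdata /mnt/disk2/appdata || true
```

`diff -rq` compares contents, not just sizes, so it catches a file that changed without growing, which `du` alone would miss.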
can4d Posted February 9, 2023 (Author)

16 minutes ago, trurl said: The only user share on the nvme was isos. isos was also on disk1 (and its mirror). No idea what that nvme pool was named unless it was referred to directly by one of your VMs. Do you know?

I do not. My original thinking, I believe, was to pool the two 500s and use the NVMe as parity, really wanting to run VMs on SSDs with backup protection. I suppose I didn't achieve that. As far as pools, I only remember the cache_ssd.
trurl Posted February 9, 2023

34 minutes ago, can4d said: use nvme as parity

Pools don't have parity. I guess you could put it in the same pool as the other two and you would get a 1TB btrfs raid1 mirror instead of the 500G raid1 mirror they were. I would put it as a separate pool for all the default shares (https://wiki.unraid.net/Manual/Shares#Default_Shares) and just use the cache_ssd pool for caching. You would have to back up those default shares to the array; there are plugins for that. We can deal with that later.
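The 1TB figure above follows from how btrfs raid1 allocates: every chunk is mirrored on two different devices, so usable capacity is roughly half the pool total, provided the largest device is no bigger than the sum of the rest (a sketch of the arithmetic; device sizes in GB are assumed from the thread's 2x500 + 1TB NVMe):

```shell
# btrfs raid1 keeps two copies of every chunk on different devices,
# so usable capacity ~= total / 2 when the largest device is no
# bigger than the sum of the others.
devices_gb="500 500 1000"   # assumed: two 500GB SSDs plus a 1TB NVMe
total=0
for d in $devices_gb; do
    total=$(( total + d ))
done
usable=$(( total / 2 ))
echo "Usable raid1 capacity: ${usable} GB"
```

Here the 1000GB device exactly equals the sum of the two 500s, so the full half is usable: 1000GB, the 1TB mirror trurl describes.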
trurl Posted February 9, 2023

Leave Docker and VM Manager disabled until further notice. New Config, assign only disk1, leave all others unassigned, start the array, and post new diagnostics.
can4d Posted February 9, 2023 (Author)

5 minutes ago, trurl said: Leave Docker and VM Manager disabled until further notice. New Config, assign only disk1, leave all others unassigned, start the array and post new diagnostics.

Done. tower-diagnostics-20230209-1609.zip
trurl Posted February 9, 2023

OK. New Config, assign the other spinner as parity. Check the "parity is valid" box, then start the array. Parity will probably be slightly out of sync, so do a correcting parity check. That will also be a good test of how well things are working. It will take several hours. Post new diagnostics if things don't seem to be working well. There should be no errors in the Errors column on Main - Array Devices. You can examine your user shares if you want, but don't do a lot of reading and no writing until the parity check completes. Some files are still on the unassigned SSDs, of course.
can4d Posted February 9, 2023 (Author)

6 minutes ago, trurl said: OK. New Config, assign other spinner as parity.

"Spinner" meaning the other 6TB drive that was not assigned as data, correct?
trurl Posted February 9, 2023

Just now, can4d said: "spinner" meaning the other 6TB drive that was not assigned as data correct?

Correct.
can4d Posted February 9, 2023 (Author)

I have the menacing alert "all existing data will be overwritten when I start" for the parity drive (sdc).
trurl Posted February 9, 2023

If you hadn't clipped Array Operations off that screenshot I wouldn't have to ask: are you running a correcting parity check now?
can4d Posted February 9, 2023 (Author)

Not yet. I thought it had done it since everything was green. So I click Check now?
trurl Posted February 9, 2023

Just now, can4d said: So I click the check now?

Yes.
can4d Posted February 9, 2023 (Author)

Running now, at roughly 0.2%, with an estimated completion of 8 hours. Ugh.
trurl Posted February 9, 2023

2 minutes ago, can4d said: estimated completion of 8 hours

Better than I would expect for 6TB. Don't be surprised if it takes longer.
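trurl's skepticism about the 8-hour estimate is easy to sanity-check: a parity check must read the whole parity disk end to end, so duration is roughly disk size divided by average throughput (a sketch; the 100 MB/s average is an assumption, since spinners read faster on the outer tracks early in the check and slow toward the inner tracks, which is also why early progress-based ETAs run low):

```shell
# Rough parity-check duration: whole-disk sequential read time.
# 1 TB = 1e12 bytes, 1 MB/s = 1e6 bytes/s, so
# seconds = size_tb * 1e12 / (avg_mbps * 1e6) = size_tb * 1e6 / avg_mbps
size_tb=6        # parity disk size from the thread
avg_mbps=100     # assumed whole-disk average for a 6TB spinner
seconds=$(( size_tb * 1000000 / avg_mbps ))
hours=$(( seconds / 3600 ))
printf 'Roughly %d hours at an average of %d MB/s\n' "$hours" "$avg_mbps"
```

At an assumed 100 MB/s average this works out to about 16 hours, double the GUI's early estimate, consistent with "don't be surprised if it takes longer."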
can4d Posted February 9, 2023 (Author)

Thank you very much for your guidance and support. I am starting to feel much calmer, with glimpses of light at the end of a dark tunnel.