lazant

Everything posted by lazant

  1. Thanks for posting this, I'd love to get this running too. Going to try it this weekend. Any tips after running it for a while? Is it still "botchy"?
  2. Thanks Jorge, all I needed to do was delete the dataset with the VMs on the old pool and I was good to go on the new pool. I'm curious how unraid handles duplicate file names existing on two different drives within one share. When I sent the VM dataset to the new drive using zfs send, it looked like this:

     cache_nvme/domains/Abbay
     cache_nvme_vm/domains/Abbay
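     A rough way to see which pools or disks actually hold a copy of the same path (just a sketch using the dataset names above; unraid mounts every pool and array disk under /mnt):

         # list every on-disk copy of the Abbay directory across pools and array disks
         ls -ld /mnt/*/domains/Abbay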
  3. I'm still new to zfs so this took me a while to read up on and build up the confidence to do, but I've moved the nested datasets with the VMs from my old cache to my new one using:

     zfs send -wR cache_nvme/domains/Abbay@2023-09-24-193824 | zfs receive -Fdu cache_nvme_vm
     zfs send -wR cache_nvme/domains/Windows\ 10\ Pro@2023-10-20-102957 | zfs receive -Fdu cache_nvme_vm
     zfs send -wR cache_nvme/domains/Windows\ 11\ Pro@2023-10-20-103005 | zfs receive -Fdu cache_nvme_vm

     Here is my new dataset structure:

     NAME  USED  AVAIL  REFER  MOUNTPOINT
     cache_nvme  345G  6.98T  104K  /mnt/cache_nvme
     cache_nvme/domains  345G  6.98T  112K  /mnt/cache_nvme/domains
     cache_nvme/domains@2023-09-20-160630  64K  -  112K  -
     cache_nvme/domains/Abbay  145G  6.98T  100G  /mnt/cache_nvme/domains/Abbay
     cache_nvme/domains/Abbay@2023-09-20-160133  2.59G  -  100G  -
     cache_nvme/domains/Abbay@2023-09-20-170610  2.13G  -  100G  -
     cache_nvme/domains/Abbay@2023-09-20-222020  3.80G  -  100G  -
     cache_nvme/domains/Abbay@2023-09-24-193824  8.49G  -  100G  -
     cache_nvme/domains/Abbay@2023-10-19-214718  2.50G  -  100G  -
     cache_nvme/domains/Windows 10 Pro  100G  6.98T  100G  /mnt/cache_nvme/domains/Windows 10 Pro
     cache_nvme/domains/Windows 10 Pro@2023-09-20-160535  56K  -  100G  -
     cache_nvme/domains/Windows 10 Pro@2023-10-20-102957  0B  -  100G  -
     cache_nvme/domains/Windows 11 Pro  100G  6.98T  100G  /mnt/cache_nvme/domains/Windows 11 Pro
     cache_nvme/domains/Windows 11 Pro@2023-09-20-160541  56K  -  100G  -
     cache_nvme/domains/Windows 11 Pro@2023-10-20-103005  0B  -  100G  -
     cache_nvme_vm  342G  3.27T  104K  /mnt/cache_nvme_vm
     cache_nvme_vm/domains  342G  3.27T  96K  /mnt/cache_nvme_vm/domains
     cache_nvme_vm/domains/Abbay  142G  3.27T  100G  /mnt/cache_nvme_vm/domains/Abbay
     cache_nvme_vm/domains/Abbay@2023-09-20-160133  2.59G  -  100G  -
     cache_nvme_vm/domains/Abbay@2023-09-20-170610  2.13G  -  100G  -
     cache_nvme_vm/domains/Abbay@2023-09-20-222020  3.80G  -  100G  -
     cache_nvme_vm/domains/Abbay@2023-09-24-193824  8.49G  -  100G  -
     cache_nvme_vm/domains/Abbay@2023-10-19-214718  0B  -  100G  -
     cache_nvme_vm/domains/Windows 10 Pro  100G  3.27T  100G  /mnt/cache_nvme_vm/domains/Windows 10 Pro
     cache_nvme_vm/domains/Windows 10 Pro@2023-09-20-160535  56K  -  100G  -
     cache_nvme_vm/domains/Windows 10 Pro@2023-10-20-102957  0B  -  100G  -
     cache_nvme_vm/domains/Windows 11 Pro  100G  3.27T  100G  /mnt/cache_nvme_vm/domains/Windows 11 Pro
     cache_nvme_vm/domains/Windows 11 Pro@2023-09-20-160541  56K  -  100G  -
     cache_nvme_vm/domains/Windows 11 Pro@2023-10-20-103005  0B  -  100G  -
     cache_ssd  6.49G  7.12T  120K  /mnt/cache_ssd
     cache_ssd/appdata  214M  7.12T  112K  /mnt/cache_ssd/appdata
     cache_ssd/appdata@2023-09-20-160600  0B  -  112K  -
     cache_ssd/appdata/DiskSpeed  8.56M  7.12T  8.56M  /mnt/cache_ssd/appdata/DiskSpeed
     cache_ssd/appdata/DiskSpeed@2023-09-20-160600  0B  -  8.56M  -
     cache_ssd/appdata/firefox  205M  7.12T  205M  /mnt/cache_ssd/appdata/firefox
     cache_ssd/appdata/firefox@2023-09-20-160600  0B  -  205M  -
     cache_ssd/asteriabackup  436K  7.12T  436K  /mnt/cache_ssd/asteriabackup
     cache_ssd/system  6.15G  7.12T  2.65G  /mnt/cache_ssd/system
     cache_ssd/system@2023-09-20-160606  3.50G  -  5.75G  -
     disk1  4.37T  11.9T  112K  /mnt/disk1
     disk1/asteriabackup  4.28T  11.9T  4.28T  /mnt/disk1/asteriabackup
     disk1/backup  70.2G  11.9T  70.2G  /mnt/disk1/backup
     disk1/isos  15.7G  11.9T  15.7G  /mnt/disk1/isos
     disk2  13.9M  16.2T  96K  /mnt/disk2
     disk3  7.71T  8.53T  7.71T  /mnt/disk3
     disk4  13.5T  2.77T  96K  /mnt/disk4
     disk4/TV Shows  13.5T  2.77T  13.5T  /mnt/disk4/TV Shows
     disk5  9.70T  6.53T  104K  /mnt/disk5
     disk5/TV Shows  9.70T  6.53T  9.70T  /mnt/disk5/TV Shows

     My question now is how to go about switching the domains share from cache_nvme/domains to cache_nvme_vm/domains? Thanks again for your help.
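     Assuming the end goal is for the domains data to live only on cache_nvme_vm, one possible zfs-level approach (a sketch, not definitive guidance) is to verify the received datasets and their snapshots, then remove the originals from the old pool so only one copy remains:

         # compare what exists on the old and new pools before touching anything
         zfs list -r -t all cache_nvme/domains
         zfs list -r -t all cache_nvme_vm/domains

         # once satisfied the copies are complete, remove the old datasets
         # (-n is a dry run showing what would be destroyed; drop it to actually destroy)
         zfs destroy -rnv cache_nvme/domains/Abbay
         zfs destroy -rv cache_nvme/domains/Abbay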
  4. Thanks Jorge, I'm away right now but will try this weekend. I appreciate the help.
  5. I do have nested datasets. Would that cause the problem?

     root@Phoebe:~# zfs list -t all
     NAME  USED  AVAIL  REFER  MOUNTPOINT
     cache_nvme  338G  6.98T  104K  /mnt/cache_nvme
     cache_nvme/domains  338G  6.98T  112K  /mnt/cache_nvme/domains
     cache_nvme/domains@2023-09-20-160630  8K  -  112K  -
     cache_nvme/domains/Abbay  138G  6.98T  100G  /mnt/cache_nvme/domains/Abbay
     cache_nvme/domains/Abbay@2023-09-20-160133  2.59G  -  100G  -
     cache_nvme/domains/Abbay@2023-09-20-170610  2.13G  -  100G  -
     cache_nvme/domains/Abbay@2023-09-20-222020  3.80G  -  100G  -
     cache_nvme/domains/Abbay@2023-09-24-193824  7.74G  -  100G  -
     cache_nvme/domains/Windows 10 Pro  100G  6.98T  100G  /mnt/cache_nvme/domains/Windows 10 Pro
     cache_nvme/domains/Windows 10 Pro@2023-09-20-160535  0B  -  100G  -
     cache_nvme/domains/Windows 11 Pro  100G  6.98T  100G  /mnt/cache_nvme/domains/Windows 11 Pro
     cache_nvme/domains/Windows 11 Pro@2023-09-20-160541  0B  -  100G  -
     cache_nvme_vm  6.08M  3.60T  96K  /mnt/cache_nvme_vm
     cache_ssd  2.75T  4.37T  120K  /mnt/cache_ssd
     cache_ssd/appdata  214M  4.37T  112K  /mnt/cache_ssd/appdata
     cache_ssd/appdata@2023-09-20-160600  0B  -  112K  -
     cache_ssd/appdata/DiskSpeed  8.56M  4.37T  8.56M  /mnt/cache_ssd/appdata/DiskSpeed
     cache_ssd/appdata/DiskSpeed@2023-09-20-160600  0B  -  8.56M  -
     cache_ssd/appdata/firefox  205M  4.37T  205M  /mnt/cache_ssd/appdata/firefox
     cache_ssd/appdata/firefox@2023-09-20-160600  0B  -  205M  -
     cache_ssd/asteriabackup  2.74T  4.37T  2.74T  /mnt/cache_ssd/asteriabackup
     cache_ssd/system  6.16G  4.37T  2.66G  /mnt/cache_ssd/system
     cache_ssd/system@2023-09-20-160606  3.50G  -  5.75G  -
     disk1  85.9G  16.2T  112K  /mnt/disk1
     disk1/backup  70.2G  16.2T  70.2G  /mnt/disk1/backup
     disk1/domains  96K  16.2T  96K  /mnt/disk1/domains
     disk1/isos  15.7G  16.2T  15.7G  /mnt/disk1/isos
     disk2  11.6M  16.2T  96K  /mnt/disk2
     disk3  7.71T  8.53T  7.71T  /mnt/disk3
     disk4  13.5T  2.77T  96K  /mnt/disk4
     disk4/TV Shows  13.5T  2.77T  13.5T  /mnt/disk4/TV Shows
     disk5  9.70T  6.53T  104K  /mnt/disk5
     disk5/TV Shows  9.70T  6.53T  9.70T  /mnt/disk5/TV Shows
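     If the nested datasets are the issue, one way to confirm that the VM folders are mounted as separate ZFS filesystems rather than plain directories (a simple check, assuming the paths above):

         # a nested dataset shows up as its own mounted filesystem
         findmnt /mnt/cache_nvme/domains/Abbay

         # or list every mount belonging to the pool
         mount | grep cache_nvme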
  6. Were you able to replicate the problem?
  7. Sorry, forgot to attach them. Here you go. The only thing that has changed since I was last able to successfully move my domains folder with the mover is the format of the VM cache disk, from btrfs to zfs. Could it be something with zfs? Thanks. phoebe-diagnostics-20230929-1036.zip
  8. I'm trying to move my domains share from one cache pool to another (via the array) using the mover, but I keep getting a "Device or resource busy" error. I've disabled the VM and Docker services in Settings and also tried booting in safe mode to rule out any plugins accessing the VMs, but it still says "Device or resource busy". I've successfully moved the domains, appdata and system shares in the past following this method. Is there any way to see what process is using the file?
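     One possible way to find the process holding things open, assuming the share lives at a path like /mnt/cache_nvme/domains (adjust to whichever pool is reporting busy) and that lsof/fuser are available in the console:

         # list open files anywhere under the share's path
         lsof +D /mnt/cache_nvme/domains

         # or show which processes are using that mount
         fuser -vm /mnt/cache_nvme/domains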
  9. Thanks for the fantastic plugin Iker, I really appreciate it! I am having an issue with one of my pools not showing up on the Main page under ZFS Master, despite it being listed after running zpool list. ZFS Master shows all my pools fine after a reboot, but after the server has been running for a while it stops displaying one of the pools on the Main page. It's not always the same pool that stops displaying, but it is always just one, i.e. I've never seen two missing at once. Right now it is disk2 that is not displaying properly. EDIT: Forgot to mention, I'm on 6.12.4 and 2023.09.05.31.main
  10. I have 7x WUH721818AL5204 18TB SAS drives on an LSI-3008 that also spin down fine. The activity lights on the front of the trays do flash every second, but the drives stay spun down. Has anyone else experienced this once-per-second flashing? Mine are in a Supermicro 826 chassis with the EL1 backplane.
  11. New Build

     Hi, looking for some advice on hardware for a new build to replace my aging Atlas server. I'd like to keep it under $5k. Here's what I'm wanting to run:

     • Plex server with a 40TB library, either on an Ubuntu VM or in a Docker
     • 1-2 Windows VMs that I can connect to remotely to do work in Quickbooks, and possibly Rhino if the latency isn't too bad
     • Nextcloud Docker for file storage

     I'm thinking 2U is the form factor I'd like since I'm almost out of room in my rack; however, I also want to make it somewhat quiet. It doesn't need to be silent, but I don't want it to sound like a jet engine. I already have an X10-DRI-T and 2x E5-2660 v3's laying around. Here's what I'm thinking for hardware:

     • Supermicro CSE-826BE1C-R920LPB 2U chassis with either the BPN-SAS3-826EL1 or BPN-SAS3-826EL1-N4 backplane, plus the optional rear drive kit for 2x SSDs
     • 4x Samsung 32GB 2Rx4 DDR4 PC4-2133P Registered ECC memory
     • Supermicro AOC-S3008L-L8E SAS3 12Gbps 8-port internal PCI-e 3.0 HBA controller
     • 5x 14TB WD Red drives to start

     Questions:

     • Is it worth getting the backplane with the NVMe capability for VMs, docker containers or cache drives? If so, what is the best way to connect it to the MB to get the most of the speed and reduce bottlenecks?
     • Any recommendations for making it quieter? I've seen several videos of people replacing the mid fans with quieter Noctuas, which is what I'm thinking I'll do. Should I also go with liquid cooling for the CPUs, or just an active or passive heatsink?
     • Would it be worth getting a low-profile graphics card for Plex transcoding?

     Thanks in advance for any advice, or just confirmation that these parts work together.
  12. Well, I just finished the migration to xfs without any issues! I ended up adding all of my unconverted disks to the global exclude list and then removing them one by one between steps 17 and 18 of each cycle through the instructions on the wiki. The files on the excluded disks were unavailable for a short time, but I didn't need to worry about accidentally changing the source disk during the rsync copy in step 8. Thanks to everyone in this community for their help! I'm truly grateful.
  13. Cheers. That cleared it up. It seems then that as I un-exclude the disks after conversion to xfs, the files will become available again in the share. I'm about halfway through a 3TB copy right now so I'll confirm tomorrow afternoon. Edit: Confirmed. Un-excluding the next disk converted to xfs did make the files on the disk available again.
  14. I looked in Finder on my Mac and there are a bunch of folders missing. However, when I tried browsing the ‘Disk Shares' from the excluded disks using UnRAID's browser interface, I can see the missing files. Phew! Anything special I need to do? Or just re-enable the disks when I'm done with all the fs conversion?
  15. So when I uncheck them from being excluded will all the files on those disks show up in the share again?
  16. Thanks for clarifying. So I'm totally panicked right now. I just opened Plex and noticed a ton of my files are missing. Is this because I switched a bunch of the disks to excluded? I assumed excluding just prevented new files from being written to those disks. Does it also prevent them from being read? Will they show up again when I 'un-exclude' them?
  17. I've been proceeding with the conversion and decided to just add all the remaining rfs drives to the excluded list. This seems to be working well and I don't need to worry about the disks being written to during the rsync copy to the swap disk. One other thing I am curious about is Step 17. Is there a good and QUICK way to check that everything is fine? Should I just pick a few random files and compare checksums?
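     For a quick sanity check, something along these lines would work (a sketch with hypothetical paths and file names; note that a full checksum pass over a multi-TB disk is not quick, so spot-checking a handful of files may be the pragmatic option):

         # dry run: report any file whose checksum differs between source and copy, changing nothing
         # (-n dry run, -r recursive, -c compare by checksum, -i itemize differences)
         rsync -nrci /mnt/disk10/ /mnt/disk11/

         # or spot-check a few individual files by hand
         md5sum "/mnt/disk10/TV Shows/example.mkv" "/mnt/disk11/TV Shows/example.mkv"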
  18. I'm in the process of a massive upgrade to my Atlas clone. I've already upgraded from 5.0.5 to 6.7.0 and replaced a couple failing drives in the nick of time (I've been really lucky). Now I'm in the process of converting all my disks from rfs to xfs. I've done 2 of 10 so far using the mirroring method described here. As with the original Atlas, I have UnRAID running as a guest on an esxi 5.1.0 host along with an ubuntu guest that runs plex. This ubuntu guest along with a local printer (for scans) and an IP cam are the only users that have access to the UnRAID shares. When I did the first 2 disks I shut down the ubuntu guest, turned off the printer and unplugged the cam to make sure nothing was writing to the array during the rsync operation. Obviously this means I can't use plex during this process. My question is, rather than only adding the swap disk to the excluded disk(s) list under 'global share settings' as instructed to do in the wiki, could I also add the source disk and be assured that nothing is written to the source disk during the rsync operation? I have no VMs or dockers running on UnRAID (yet).
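     For reference, the copy itself is a single rsync from the source disk to the swap disk, roughly along these lines (a sketch with placeholder disk numbers, not the exact wiki command):

         # copy everything from the source disk to the freshly formatted swap disk,
         # preserving permissions, timestamps and extended attributes
         rsync -avPX /mnt/disk10/ /mnt/disk11/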
  19. I’m curious why we don’t add BOTH disks to the excluded list to ensure nothing is written to them during the copy? The wiki says to just add the swap disk (disk 11 in the example).
  20. I'm currently at step 15 of the mirroring-method process for converting the file system of my first drive (disk10). I started with 10 disks formatted with RFS, added a disk11 formatted with XFS, and used rsync to copy all data from disk10 to disk11. I've swapped the drive assignments of disk10 and disk11 (swap) and clicked on the disk names to swap the file system formats; however, both say "auto" for file system. I just want to make sure I don't mess this up. Should I set disk10 to xfs and disk11 to rfs? Since the other 9 disks are all set to auto, do I need to go through and set them to rfs as described at the end of step 11? Thanks.
  21. OK, so I finally finished preclearing the new drives and I'm ready to proceed. I have a 6TB parity 1 drive, a 3TB parity 2 drive, and 10x 3TB data drives. Drive 10 is failing and I want to replace it with a 6TB drive, then proceed to converting all the drives to XFS using the “mirroring” method, which requires there to be only one parity drive. I just unassigned the 2nd parity disk, powered down, removed the disk, powered on, and tried to assign a precleared 6TB drive to disk 10, but it said the configuration was invalid and that parity 2 was missing. Is there something else I need to do to have unraid forget about parity 2?
  22. Yea I'll add the 2nd parity after I finish all the mirroring. Thanks for the help.
  23. According to the wiki, the mirroring procedure will break the validity of the 2nd parity disk. So I thought I would just remove it before beginning to avoid any confusion.