lazant

Members
  • Posts

    41
  • Joined

  • Last visited

Converted

  • Gender
    Undisclosed

lazant's Achievements

Rookie (2/14)

2 Reputation

  1. Thanks for posting this, I'd love to get this running too. I'm going to try it this weekend. Any tips after running it for a while? Is it still "botchy"?
  2. Thanks Jorge, all I needed to do was delete the dataset with the VMs on the old pool and I was good to go off the new pool. I'm curious, though: how does Unraid handle duplicate file names on two different drives within one share? When I sent the VM dataset to the new drive using zfs send, it looked like this:

        cache_nvme/domains/Abbay
        cache_nvme_vm/domains/Abbay
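     A minimal sketch of that kind of cleanup, assuming the copy on the new pool has already been verified and the old copy is no longer needed; the dataset names are the ones from this thread:

        # Confirm the received copy exists on the new pool before touching the old one
        zfs list -r cache_nvme_vm/domains

        # Dry run first: -n prints what would be destroyed without doing it
        zfs destroy -rvn cache_nvme/domains/Abbay

        # Then remove the old dataset and its snapshots for real
        zfs destroy -rv cache_nvme/domains/Abbay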
  3. I'm still new to ZFS, so this took me a while to read up on and build the confidence to do, but I've moved the nested datasets with the VMs from my old cache to my new one using:

        zfs send -wR cache_nvme/domains/Abbay@2023-09-24-193824 | zfs receive -Fdu cache_nvme_vm
        zfs send -wR cache_nvme/domains/Windows\ 10\ Pro@2023-10-20-102957 | zfs receive -Fdu cache_nvme_vm
        zfs send -wR cache_nvme/domains/Windows\ 11\ Pro@2023-10-20-103005 | zfs receive -Fdu cache_nvme_vm

     Here is my new dataset structure:

        NAME  USED  AVAIL  REFER  MOUNTPOINT
        cache_nvme  345G  6.98T  104K  /mnt/cache_nvme
        cache_nvme/domains  345G  6.98T  112K  /mnt/cache_nvme/domains
        cache_nvme/domains@2023-09-20-160630  64K  -  112K  -
        cache_nvme/domains/Abbay  145G  6.98T  100G  /mnt/cache_nvme/domains/Abbay
        cache_nvme/domains/Abbay@2023-09-20-160133  2.59G  -  100G  -
        cache_nvme/domains/Abbay@2023-09-20-170610  2.13G  -  100G  -
        cache_nvme/domains/Abbay@2023-09-20-222020  3.80G  -  100G  -
        cache_nvme/domains/Abbay@2023-09-24-193824  8.49G  -  100G  -
        cache_nvme/domains/Abbay@2023-10-19-214718  2.50G  -  100G  -
        cache_nvme/domains/Windows 10 Pro  100G  6.98T  100G  /mnt/cache_nvme/domains/Windows 10 Pro
        cache_nvme/domains/Windows 10 Pro@2023-09-20-160535  56K  -  100G  -
        cache_nvme/domains/Windows 10 Pro@2023-10-20-102957  0B  -  100G  -
        cache_nvme/domains/Windows 11 Pro  100G  6.98T  100G  /mnt/cache_nvme/domains/Windows 11 Pro
        cache_nvme/domains/Windows 11 Pro@2023-09-20-160541  56K  -  100G  -
        cache_nvme/domains/Windows 11 Pro@2023-10-20-103005  0B  -  100G  -
        cache_nvme_vm  342G  3.27T  104K  /mnt/cache_nvme_vm
        cache_nvme_vm/domains  342G  3.27T  96K  /mnt/cache_nvme_vm/domains
        cache_nvme_vm/domains/Abbay  142G  3.27T  100G  /mnt/cache_nvme_vm/domains/Abbay
        cache_nvme_vm/domains/Abbay@2023-09-20-160133  2.59G  -  100G  -
        cache_nvme_vm/domains/Abbay@2023-09-20-170610  2.13G  -  100G  -
        cache_nvme_vm/domains/Abbay@2023-09-20-222020  3.80G  -  100G  -
        cache_nvme_vm/domains/Abbay@2023-09-24-193824  8.49G  -  100G  -
        cache_nvme_vm/domains/Abbay@2023-10-19-214718  0B  -  100G  -
        cache_nvme_vm/domains/Windows 10 Pro  100G  3.27T  100G  /mnt/cache_nvme_vm/domains/Windows 10 Pro
        cache_nvme_vm/domains/Windows 10 Pro@2023-09-20-160535  56K  -  100G  -
        cache_nvme_vm/domains/Windows 10 Pro@2023-10-20-102957  0B  -  100G  -
        cache_nvme_vm/domains/Windows 11 Pro  100G  3.27T  100G  /mnt/cache_nvme_vm/domains/Windows 11 Pro
        cache_nvme_vm/domains/Windows 11 Pro@2023-09-20-160541  56K  -  100G  -
        cache_nvme_vm/domains/Windows 11 Pro@2023-10-20-103005  0B  -  100G  -
        cache_ssd  6.49G  7.12T  120K  /mnt/cache_ssd
        cache_ssd/appdata  214M  7.12T  112K  /mnt/cache_ssd/appdata
        cache_ssd/appdata@2023-09-20-160600  0B  -  112K  -
        cache_ssd/appdata/DiskSpeed  8.56M  7.12T  8.56M  /mnt/cache_ssd/appdata/DiskSpeed
        cache_ssd/appdata/DiskSpeed@2023-09-20-160600  0B  -  8.56M  -
        cache_ssd/appdata/firefox  205M  7.12T  205M  /mnt/cache_ssd/appdata/firefox
        cache_ssd/appdata/firefox@2023-09-20-160600  0B  -  205M  -
        cache_ssd/asteriabackup  436K  7.12T  436K  /mnt/cache_ssd/asteriabackup
        cache_ssd/system  6.15G  7.12T  2.65G  /mnt/cache_ssd/system
        cache_ssd/system@2023-09-20-160606  3.50G  -  5.75G  -
        disk1  4.37T  11.9T  112K  /mnt/disk1
        disk1/asteriabackup  4.28T  11.9T  4.28T  /mnt/disk1/asteriabackup
        disk1/backup  70.2G  11.9T  70.2G  /mnt/disk1/backup
        disk1/isos  15.7G  11.9T  15.7G  /mnt/disk1/isos
        disk2  13.9M  16.2T  96K  /mnt/disk2
        disk3  7.71T  8.53T  7.71T  /mnt/disk3
        disk4  13.5T  2.77T  96K  /mnt/disk4
        disk4/TV Shows  13.5T  2.77T  13.5T  /mnt/disk4/TV Shows
        disk5  9.70T  6.53T  104K  /mnt/disk5
        disk5/TV Shows  9.70T  6.53T  9.70T  /mnt/disk5/TV Shows

     My question now is how to go about switching the domains share from cache_nvme/domains to cache_nvme_vm/domains? Thanks again for your help.
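     A hedged sketch of how the move could be sanity-checked before repointing the share (the share change itself happens in the Unraid GUI and isn't shown here); the dataset names are the ones above:

        # The snapshot lists on the old and new pools should match
        zfs list -t snapshot -r cache_nvme/domains
        zfs list -t snapshot -r cache_nvme_vm/domains

        # Check where the received datasets will mount, since the share is served from /mnt/<pool>/domains
        zfs get -r mountpoint cache_nvme_vm/domains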
  4. Thanks Jorge, I'm away right now but will try this weekend. I appreciate the help.
  5. I do have nested datasets. Would that cause the problem?

        root@Phoebe:~# zfs list -t all
        NAME  USED  AVAIL  REFER  MOUNTPOINT
        cache_nvme  338G  6.98T  104K  /mnt/cache_nvme
        cache_nvme/domains  338G  6.98T  112K  /mnt/cache_nvme/domains
        cache_nvme/domains@2023-09-20-160630  8K  -  112K  -
        cache_nvme/domains/Abbay  138G  6.98T  100G  /mnt/cache_nvme/domains/Abbay
        cache_nvme/domains/Abbay@2023-09-20-160133  2.59G  -  100G  -
        cache_nvme/domains/Abbay@2023-09-20-170610  2.13G  -  100G  -
        cache_nvme/domains/Abbay@2023-09-20-222020  3.80G  -  100G  -
        cache_nvme/domains/Abbay@2023-09-24-193824  7.74G  -  100G  -
        cache_nvme/domains/Windows 10 Pro  100G  6.98T  100G  /mnt/cache_nvme/domains/Windows 10 Pro
        cache_nvme/domains/Windows 10 Pro@2023-09-20-160535  0B  -  100G  -
        cache_nvme/domains/Windows 11 Pro  100G  6.98T  100G  /mnt/cache_nvme/domains/Windows 11 Pro
        cache_nvme/domains/Windows 11 Pro@2023-09-20-160541  0B  -  100G  -
        cache_nvme_vm  6.08M  3.60T  96K  /mnt/cache_nvme_vm
        cache_ssd  2.75T  4.37T  120K  /mnt/cache_ssd
        cache_ssd/appdata  214M  4.37T  112K  /mnt/cache_ssd/appdata
        cache_ssd/appdata@2023-09-20-160600  0B  -  112K  -
        cache_ssd/appdata/DiskSpeed  8.56M  4.37T  8.56M  /mnt/cache_ssd/appdata/DiskSpeed
        cache_ssd/appdata/DiskSpeed@2023-09-20-160600  0B  -  8.56M  -
        cache_ssd/appdata/firefox  205M  4.37T  205M  /mnt/cache_ssd/appdata/firefox
        cache_ssd/appdata/firefox@2023-09-20-160600  0B  -  205M  -
        cache_ssd/asteriabackup  2.74T  4.37T  2.74T  /mnt/cache_ssd/asteriabackup
        cache_ssd/system  6.16G  4.37T  2.66G  /mnt/cache_ssd/system
        cache_ssd/system@2023-09-20-160606  3.50G  -  5.75G  -
        disk1  85.9G  16.2T  112K  /mnt/disk1
        disk1/backup  70.2G  16.2T  70.2G  /mnt/disk1/backup
        disk1/domains  96K  16.2T  96K  /mnt/disk1/domains
        disk1/isos  15.7G  16.2T  15.7G  /mnt/disk1/isos
        disk2  11.6M  16.2T  96K  /mnt/disk2
        disk3  7.71T  8.53T  7.71T  /mnt/disk3
        disk4  13.5T  2.77T  96K  /mnt/disk4
        disk4/TV Shows  13.5T  2.77T  13.5T  /mnt/disk4/TV Shows
        disk5  9.70T  6.53T  104K  /mnt/disk5
        disk5/TV Shows  9.70T  6.53T  9.70T  /mnt/disk5/TV Shows
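     For reference, a quick way to see just the nesting in question (a sketch using the dataset names above):

        # List only the child datasets nested under the domains dataset
        zfs list -r -t filesystem -o name,mountpoint cache_nvme/domains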
  6. Were you able to replicate the problem?
  7. Sorry, forgot to attach them. Here you go. The only thing I've changed since I've been able to successfully move my domains folder with the mover is the format of the vm cache disk from btrfs to zfs. Could it be something with zfs? Thanks. phoebe-diagnostics-20230929-1036.zip
  8. I'm trying to move my domains share from one cache pool to another (via the array) using the mover, but I keep getting a "Device or resource busy" error. I've disabled the VM and Docker services in Settings and also tried booting in safe mode to rule out any plugins accessing the VMs, but it still says "Device or resource busy". I've successfully moved the domains, appdata and system shares in the past following this method. Is there any way to see what process is using the file?
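     A minimal sketch of how to find what is holding the files open; the paths are the pool mounts from this thread and would differ on another system:

        # List processes with open files anywhere under the domains share on the source pool
        lsof +D /mnt/cache_nvme/domains

        # Or show the PIDs keeping the whole mountpoint busy
        fuser -vm /mnt/cache_nvme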
  9. Thanks for the fantastic plugin Iker, I really appreciate it! I'm having an issue with one of my pools not showing up on the Main page under ZFS Master, despite it being listed when I run zpool list. ZFS Master shows all my pools fine after a reboot, but after the server has been running for a while it stops displaying one of the pools on the Main page. It's not always the same pool that stops displaying, but it is always just one, i.e. I've never seen two missing at once. Right now it is disk2 that is not displaying properly. EDIT: Forgot to mention, I'm on 6.12.4 and 2023.09.05.31.main.
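     A small sketch of a check that could be run while the pool is missing from the Main page; disk2 is the pool named above:

        # Confirm the pool is still imported and healthy even though the plugin isn't listing it
        zpool list
        zpool status disk2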
  10. I have 7x WUH721818AL5204 18TB SAS drives on an LSI-3008 that also spin down fine. The activity lights on the front of the trays do flash every second, but the drives stay spun down. Anyone else experiencing this flashing every second? Mine are in a Supermicro 826 chassis with the EL1 backplane.
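     A hedged sketch of one way to confirm a drive really is spun down despite the blinking tray light; /dev/sdX is a placeholder, and the standby check's behaviour can vary on SAS drives:

        # Query the drive, but bail out (and say so) if it is in standby,
        # so the query itself does not spin the drive up
        smartctl -i -n standby /dev/sdX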
  11. New Build

     Hi, looking for some advice on hardware for a new build to replace my aging Atlas server. I'd like to keep it under $5k. Here's what I want to run:

       • A Plex server with a 40TB library, either in an Ubuntu VM or in a Docker container
       • 1-2 Windows VMs that I can connect to remotely to do work in QuickBooks, and possibly Rhino if the latency isn't too bad
       • A Nextcloud Docker container for file storage

     I'm thinking 2U is the form factor I'd like since I'm almost out of room in my rack; however, I also want it to be somewhat quiet. It doesn't need to be silent, but I don't want it to sound like a jet engine. I already have an X10-DRI-T and 2x E5-2660 v3s lying around. Here's what I'm thinking for hardware:

       • Supermicro CSE-826BE1C-R920LPB 2U chassis with either the BPN-SAS3-826EL1 or BPN-SAS3-826EL1-N4 backplane and the optional rear drive kit for 2x SSDs
       • 4x Samsung 32GB 2Rx4 DDR4 PC4-2133P Registered ECC memory
       • Supermicro AOC-S3008L-L8E SAS3 12Gbps 8-port internal PCIe 3.0 HBA controller
       • 5x 14TB WD Red drives to start

     Questions:

       • Is it worth getting the backplane with NVMe capability for VMs, Docker containers or cache drives? If so, what is the best way to connect it to the motherboard to get the most speed and reduce bottlenecks?
       • Any recommendations for making it quieter? I've seen several videos of people replacing the mid fans with quieter Noctuas, which is what I'm thinking I'll do. Should I also go with liquid cooling for the CPUs, or just an active or passive heatsink?
       • Would it be worth getting a low-profile graphics card for Plex transcoding?

     Thanks in advance for any advice, or just confirmation that these parts will work together.
  12. Well I just finished the migration to xfs without any issues! I ended up adding all of my unconverted disks to the global exclude list and then removed them one by one after each cycle through the instructions on the wiki between steps 17 and 18. The files on the excluded disks were unavailable for a short time but I didn't need to worry about accidentally changing the source disk during the rsync copy in step 8. Thanks to everyone in this community for their help! I'm truly grateful.
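     For context, the general shape of the disk-to-disk copy that step 8 of the wiki describes; the disk numbers here are placeholders, not specific disks from this thread:

        # Copy the contents of the source disk onto the freshly formatted xfs disk,
        # preserving permissions, timestamps and extended attributes
        rsync -avPX /mnt/diskX/ /mnt/diskY/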
  13. Cheers. That cleared it up. It seems then that as I un-exclude the disks after conversion to xfs, the files will become available again in the share. I'm about halfway through a 3TB copy right now so I'll confirm tomorrow afternoon. Edit: Confirmed. Un-excluding the next disk converted to xfs did make the files on the disk available again.
  14. I looked in Finder on my Mac and there are a bunch of folders missing. However, when I browse the ‘Disk Shares’ for the excluded disks using Unraid’s browser interface, I can see the missing files. Phew! Anything special I need to do? Or just re-enable the disks when I’m done with all the filesystem conversion?