lazant

Members
  • Posts

    41
  • Joined

  • Last visited

Posts posted by lazant

1. Thanks Jorge. All I needed to do was delete the dataset with the VMs on the old pool (rough sketch of the command at the end of this post), and I was good to go running off the new pool.

     

I'm curious how Unraid handles duplicate file names existing on two different drives within one share. When I sent the VM dataset to the new pool using zfs send, it looked like this:

    cache_nvme/domains/Abbay
    cache_nvme_vm/domains/Abbay
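
    For anyone finding this later, the cleanup was just a recursive destroy of the now-duplicate datasets on the old pool; a rough sketch for one of them (dry-run first and double-check the names):

    # Dry run: -n only prints what would be destroyed, -v lists each item
    zfs destroy -rnv cache_nvme/domains/Abbay
    # Then for real (removes the dataset and its snapshots from the old pool)
    zfs destroy -r cache_nvme/domains/Abbay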
2. I'm still new to ZFS, so this took me a while to read up on and build the confidence to do, but I've moved the nested datasets with the VMs from my old cache to my new one using:

     

    zfs send -wR cache_nvme/domains/Abbay@2023-09-24-193824 | zfs receive -Fdu cache_nvme_vm
    zfs send -wR cache_nvme/domains/Windows\ 10\ Pro@2023-10-20-102957 | zfs receive -Fdu cache_nvme_vm
    zfs send -wR cache_nvme/domains/Windows\ 11\ Pro@2023-10-20-103005 | zfs receive -Fdu cache_nvme_vm
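
    For anyone copying these, a quick note on the flags as I understand them (check the man pages before relying on this):

    # zfs send -w     raw stream (data is sent exactly as stored, preserving encryption/compression)
    # zfs send -R     replication stream: carries the dataset's snapshots and properties
    # zfs receive -F  allow the target to be rolled back/overwritten if it has changed
    # zfs receive -d  drop the source pool name and recreate the rest of the path under the target pool
    # zfs receive -u  leave the received datasets unmounted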

     

    Here is my new dataset structure:

     

    NAME                                                     USED  AVAIL     REFER  MOUNTPOINT
    cache_nvme                                               345G  6.98T      104K  /mnt/cache_nvme
    cache_nvme/domains                                       345G  6.98T      112K  /mnt/cache_nvme/domains
    cache_nvme/domains@2023-09-20-160630                      64K      -      112K  -
    cache_nvme/domains/Abbay                                 145G  6.98T      100G  /mnt/cache_nvme/domains/Abbay
    cache_nvme/domains/Abbay@2023-09-20-160133              2.59G      -      100G  -
    cache_nvme/domains/Abbay@2023-09-20-170610              2.13G      -      100G  -
    cache_nvme/domains/Abbay@2023-09-20-222020              3.80G      -      100G  -
    cache_nvme/domains/Abbay@2023-09-24-193824              8.49G      -      100G  -
    cache_nvme/domains/Abbay@2023-10-19-214718              2.50G      -      100G  -
    cache_nvme/domains/Windows 10 Pro                        100G  6.98T      100G  /mnt/cache_nvme/domains/Windows 10 Pro
    cache_nvme/domains/Windows 10 Pro@2023-09-20-160535       56K      -      100G  -
    cache_nvme/domains/Windows 10 Pro@2023-10-20-102957        0B      -      100G  -
    cache_nvme/domains/Windows 11 Pro                        100G  6.98T      100G  /mnt/cache_nvme/domains/Windows 11 Pro
    cache_nvme/domains/Windows 11 Pro@2023-09-20-160541       56K      -      100G  -
    cache_nvme/domains/Windows 11 Pro@2023-10-20-103005        0B      -      100G  -
    cache_nvme_vm                                            342G  3.27T      104K  /mnt/cache_nvme_vm
    cache_nvme_vm/domains                                    342G  3.27T       96K  /mnt/cache_nvme_vm/domains
    cache_nvme_vm/domains/Abbay                              142G  3.27T      100G  /mnt/cache_nvme_vm/domains/Abbay
    cache_nvme_vm/domains/Abbay@2023-09-20-160133           2.59G      -      100G  -
    cache_nvme_vm/domains/Abbay@2023-09-20-170610           2.13G      -      100G  -
    cache_nvme_vm/domains/Abbay@2023-09-20-222020           3.80G      -      100G  -
    cache_nvme_vm/domains/Abbay@2023-09-24-193824           8.49G      -      100G  -
    cache_nvme_vm/domains/Abbay@2023-10-19-214718              0B      -      100G  -
    cache_nvme_vm/domains/Windows 10 Pro                     100G  3.27T      100G  /mnt/cache_nvme_vm/domains/Windows 10 Pro
    cache_nvme_vm/domains/Windows 10 Pro@2023-09-20-160535    56K      -      100G  -
    cache_nvme_vm/domains/Windows 10 Pro@2023-10-20-102957     0B      -      100G  -
    cache_nvme_vm/domains/Windows 11 Pro                     100G  3.27T      100G  /mnt/cache_nvme_vm/domains/Windows 11 Pro
    cache_nvme_vm/domains/Windows 11 Pro@2023-09-20-160541    56K      -      100G  -
    cache_nvme_vm/domains/Windows 11 Pro@2023-10-20-103005     0B      -      100G  -
    cache_ssd                                               6.49G  7.12T      120K  /mnt/cache_ssd
    cache_ssd/appdata                                        214M  7.12T      112K  /mnt/cache_ssd/appdata
    cache_ssd/appdata@2023-09-20-160600                        0B      -      112K  -
    cache_ssd/appdata/DiskSpeed                             8.56M  7.12T     8.56M  /mnt/cache_ssd/appdata/DiskSpeed
    cache_ssd/appdata/DiskSpeed@2023-09-20-160600              0B      -     8.56M  -
    cache_ssd/appdata/firefox                                205M  7.12T      205M  /mnt/cache_ssd/appdata/firefox
    cache_ssd/appdata/firefox@2023-09-20-160600                0B      -      205M  -
    cache_ssd/asteriabackup                                  436K  7.12T      436K  /mnt/cache_ssd/asteriabackup
    cache_ssd/system                                        6.15G  7.12T     2.65G  /mnt/cache_ssd/system
    cache_ssd/system@2023-09-20-160606                      3.50G      -     5.75G  -
    disk1                                                   4.37T  11.9T      112K  /mnt/disk1
    disk1/asteriabackup                                     4.28T  11.9T     4.28T  /mnt/disk1/asteriabackup
    disk1/backup                                            70.2G  11.9T     70.2G  /mnt/disk1/backup
    disk1/isos                                              15.7G  11.9T     15.7G  /mnt/disk1/isos
    disk2                                                   13.9M  16.2T       96K  /mnt/disk2
    disk3                                                   7.71T  8.53T     7.71T  /mnt/disk3
    disk4                                                   13.5T  2.77T       96K  /mnt/disk4
    disk4/TV Shows                                          13.5T  2.77T     13.5T  /mnt/disk4/TV Shows
    disk5                                                   9.70T  6.53T      104K  /mnt/disk5
    disk5/TV Shows                                          9.70T  6.53T     9.70T  /mnt/disk5/TV Shows

     

My question now is: how do I go about switching the domains share from cache_nvme/domains to cache_nvme_vm/domains?

     

    Thanks again for your help.

  3. I do have nested datasets. Would that cause the problem?

     

    root@Phoebe:~# zfs list -t all
    NAME                                                  USED  AVAIL     REFER  MOUNTPOINT
    cache_nvme                                            338G  6.98T      104K  /mnt/cache_nvme
    cache_nvme/domains                                    338G  6.98T      112K  /mnt/cache_nvme/domains
    cache_nvme/domains@2023-09-20-160630                    8K      -      112K  -
    cache_nvme/domains/Abbay                              138G  6.98T      100G  /mnt/cache_nvme/domains/Abbay
    cache_nvme/domains/Abbay@2023-09-20-160133           2.59G      -      100G  -
    cache_nvme/domains/Abbay@2023-09-20-170610           2.13G      -      100G  -
    cache_nvme/domains/Abbay@2023-09-20-222020           3.80G      -      100G  -
    cache_nvme/domains/Abbay@2023-09-24-193824           7.74G      -      100G  -
    cache_nvme/domains/Windows 10 Pro                     100G  6.98T      100G  /mnt/cache_nvme/domains/Windows 10 Pro
    cache_nvme/domains/Windows 10 Pro@2023-09-20-160535     0B      -      100G  -
    cache_nvme/domains/Windows 11 Pro                     100G  6.98T      100G  /mnt/cache_nvme/domains/Windows 11 Pro
    cache_nvme/domains/Windows 11 Pro@2023-09-20-160541     0B      -      100G  -
    cache_nvme_vm                                        6.08M  3.60T       96K  /mnt/cache_nvme_vm
    cache_ssd                                            2.75T  4.37T      120K  /mnt/cache_ssd
    cache_ssd/appdata                                     214M  4.37T      112K  /mnt/cache_ssd/appdata
    cache_ssd/appdata@2023-09-20-160600                     0B      -      112K  -
    cache_ssd/appdata/DiskSpeed                          8.56M  4.37T     8.56M  /mnt/cache_ssd/appdata/DiskSpeed
    cache_ssd/appdata/DiskSpeed@2023-09-20-160600           0B      -     8.56M  -
    cache_ssd/appdata/firefox                             205M  4.37T      205M  /mnt/cache_ssd/appdata/firefox
    cache_ssd/appdata/firefox@2023-09-20-160600             0B      -      205M  -
    cache_ssd/asteriabackup                              2.74T  4.37T     2.74T  /mnt/cache_ssd/asteriabackup
    cache_ssd/system                                     6.16G  4.37T     2.66G  /mnt/cache_ssd/system
    cache_ssd/system@2023-09-20-160606                   3.50G      -     5.75G  -
    disk1                                                85.9G  16.2T      112K  /mnt/disk1
    disk1/backup                                         70.2G  16.2T     70.2G  /mnt/disk1/backup
    disk1/domains                                          96K  16.2T       96K  /mnt/disk1/domains
    disk1/isos                                           15.7G  16.2T     15.7G  /mnt/disk1/isos
    disk2                                                11.6M  16.2T       96K  /mnt/disk2
    disk3                                                7.71T  8.53T     7.71T  /mnt/disk3
    disk4                                                13.5T  2.77T       96K  /mnt/disk4
    disk4/TV Shows                                       13.5T  2.77T     13.5T  /mnt/disk4/TV Shows
    disk5                                                9.70T  6.53T      104K  /mnt/disk5
    disk5/TV Shows                                       9.70T  6.53T     9.70T  /mnt/disk5/TV Shows

     

  4. Were you able to replicate the problem?

     

    Quote

    root@Phoebe:~# fuser -v /mnt/cache_nvme/domains/Abbay
                         USER        PID ACCESS COMMAND
    /mnt/cache_nvme/domains/Abbay:
                         root     kernel mount /mnt/cache_nvme/domains/Abbay
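
    If it helps, a couple of quick ways to see what that "kernel mount" line refers to, i.e. what is mounted at or below the path (just a sketch):

    findmnt -R /mnt/cache_nvme/domains                   # list every mount at or below this path
    zfs list -r -o name,mountpoint cache_nvme/domains    # show child datasets and their mountpoints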

     

5. I'm trying to move my domains share from one cache pool to another (via the array) using the mover, but I keep getting a "Device or resource busy" error. I've disabled the VM and Docker services in Settings and also tried booting in safe mode to rule out any plugins accessing the VMs, but it still says "Device or resource busy". I've successfully moved the domains, appdata and system shares in the past following this method.

     

    Is there any way to see what process is using the file?

     

    Quote

    Sep 28 17:17:32 Phoebe emhttpd: shcmd (132): /usr/local/sbin/mover |& logger -t move &
    Sep 28 17:17:32 Phoebe move: mover: started
    Sep 28 17:17:32 Phoebe move: file: /mnt/cache_nvme/domains/Windows 10 Pro/vdisk1.img
    Sep 28 17:17:32 Phoebe move: create_parent: /mnt/cache_nvme/domains/Windows 10 Pro error: Device or resource busy
    Sep 28 17:17:32 Phoebe move: move_object: /mnt/cache_nvme/domains/Windows 10 Pro: Device or resource busy
    Sep 28 17:17:32 Phoebe move: file: /mnt/cache_nvme/domains/Abbay/vdisk1.img
    Sep 28 17:17:32 Phoebe move: create_parent: /mnt/cache_nvme/domains/Abbay error: Device or resource busy
    Sep 28 17:17:32 Phoebe move: move_object: /mnt/cache_nvme/domains/Abbay: Device or resource busy
    Sep 28 17:17:32 Phoebe move: file: /mnt/cache_nvme/domains/Windows 11 Pro/vdisk1.img
    Sep 28 17:17:32 Phoebe move: create_parent: /mnt/cache_nvme/domains/Windows 11 Pro error: Device or resource busy
    Sep 28 17:17:32 Phoebe move: move_object: /mnt/cache_nvme/domains/Windows 11 Pro: Device or resource busy
    Sep 28 17:17:33 Phoebe move: mover: finished
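
    To expand on the "what process is using it" question above, these are the kinds of commands I mean (a sketch; lsof may not be available on a stock install):

    fuser -vm /mnt/cache_nvme/domains/Abbay    # every process using the filesystem that contains the path
    lsof +D /mnt/cache_nvme/domains/Abbay      # anything with a file open somewhere under the directory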

     

  6. 4 minutes ago, Iker said:

    Well, enjoy, my friends, because a new update is live with the so-long-awaited functionality; the changelog is the following:

     

    2023.09.27

    • Change - "No refresh" option now doesn't load information on page refresh
    • Fix - Dynamic Config reload

    The "Dynamic Config reload" means you don't have to close the window for the config to apply correctly.


    How can I buy you a beer?!

7. Thanks for the fantastic plugin, Iker; I really appreciate it!

     

I am having an issue with one of my pools not showing up on the Main page under ZFS Master, despite it being listed when I run zpool list. ZFS Master shows all my pools fine after a reboot, but after the server has been running for a while it stops displaying one of the pools on the Main page. It's not always the same pool that disappears, but it is always just one at a time; I've never seen two missing at once. Right now it is disk2 that is not displaying properly:

     

[Screenshot 2023-09-21 at 1:27:29 PM]

[Screenshot 2023-09-21 at 1:28:32 PM]

     

    EDIT: Forgot to mention, I'm on 6.12.4 and 2023.09.05.31.main

  8. On 4/24/2023 at 4:26 PM, doron said:

Oddly, I have never compiled a list of drives that do perform nicely with the spin down/up SCSI/SAS commands. I have collected a list of exclusions - i.e. drives (or drive/controller combos) that misbehave, or are otherwise known to either ignore the spin down command or create some sort of breakage upon receiving them.

    It might be a good idea to compile success stories into such a list.

     

    I'll kick it off: I have a few HUH721212AL4200 (12TB HGST) on an on-board supermicro SAS controller (LSI 2308). They spin down and up rather perfectly.

I have 7x WUH721818AL5204 18TB SAS drives on an LSI-3008 that also spin down fine. The activity lights on the front of the trays do flash every second, but the drives stay spun down. Has anyone else seen this once-per-second flashing? Mine are in a Supermicro 826 chassis with the EL1 backplane.
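
    For reference, if you want to test by hand how a particular SAS drive reacts to the stop/start commands, sg3_utils can issue them directly; a sketch (swap in your own device node for /dev/sdX):

    sg_start --stop /dev/sdX     # ask the drive to spin down (SCSI START STOP UNIT, stop)
    sg_start --start /dev/sdX    # spin it back up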

9. Hi, I'm looking for some advice on hardware for a new build to replace my aging Atlas server. I'd like to keep it under $5k. Here's what I want to run:

• Plex server with a 40TB library, either on an Ubuntu VM or in a Docker container
    • 1-2 Windows VMs that I can connect to remotely to do work in Quickbooks and possibly Rhino if the latency isn't too bad
    • Nextcloud Docker for file storage

I'm thinking 2U is the form factor I'd like since I'm almost out of room in my rack; however, I also want it to be somewhat quiet. It doesn't need to be silent, but I don't want it to sound like a jet engine. I already have an X10-DRI-T and 2x E5-2660 v3s lying around. Here's what I'm thinking for hardware:

     

    Questions:

• Is it worth getting the backplane with NVMe capability for VMs, Docker containers or cache drives?
  • If so, what is the best way to connect it to the motherboard to get the most speed and reduce bottlenecks?
• Any recommendations for making it quieter? I've seen several videos of people replacing the mid fans with quieter Noctuas, which is what I'm thinking I'll do. Should I also go with liquid cooling for the CPUs, or just an active or passive heatsink?
    • Would it be worth getting a low profile graphics card for Plex transcoding?

     

    Thanks in advance for any advice or just confirmation that these parts work.

10. Well, I just finished the migration to xfs without any issues! I ended up adding all of my unconverted disks to the global exclude list and then removed them one by one between steps 17 and 18 of each cycle through the instructions on the wiki. The files on the excluded disks were unavailable for a short time, but I didn't need to worry about accidentally changing the source disk during the rsync copy in step 8.

     

    Thanks to everyone in this community for their help! I'm truly grateful.

  11. 13 hours ago, Frank1940 said:

    EDIT: you might want to read through this thread:

     

     

    Cheers. That cleared it up. It seems then that as I un-exclude the disks after conversion to xfs, the files will become available again in the share. I'm about halfway through a 3TB copy right now so I'll confirm tomorrow afternoon.

     

    Edit:

    Confirmed. Un-excluding the next disk converted to xfs did make the files on the disk available again.

  12. 44 minutes ago, Frank1940 said:

    Open up the shares in Windows using File Explorer and see if the 'missing' files are there.  It is my understanding that excluding a share from a disk only prevents writing new files to that disk.  Existing files can still be read.  What could be happening is that Plex is attempting to access those files in a read-write mode and that might not be allowed.

I looked in Finder on my Mac and there are a bunch of folders missing. However, when I browse the ‘Disk Shares’ for the excluded disks using UnRAID’s browser interface, I can see the missing files. Phew!

     

Anything special I need to do? Or do I just re-enable the disks when I’m done with all the fs conversions?

  13. 4 hours ago, Frank1940 said:

I think what step 17 is referring to is to double check/verify that you have requested the format changes to the proper disks on the setup screen BEFORE you click 'Apply'.  Because if you didn't, you will lose all of the data on an 'innocent' drive as soon as it starts the format procedure.  As I recall, this is a point at which it is actually quite easy to make a mistake.  That was why I had a table prepared with what the settings were to be on each step and I checked off each step as I did it!

    Thanks for clarifying.

     

    So I'm totally panicked right now. Just opened Plex and noticed a ton of my files are missing. Is this because I switched a bunch of the disks to excluded? I assumed excluded just prevented new files from being written to those disks. Does it also not allow them to be read? Will they show up again when I 'un-exclude' them?

  14. I've been proceeding with the conversion and decided to just add all the remaining rfs drives to the excluded list. This seems to be working well and I don't need to worry about the disks being written to during the rsync copy to the swap disk.

     

    One other thing I am curious about is Step 17. Is there a good and QUICK way to check that everything is fine? Should I just pick a few random files and compare checksums?
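
    The kind of spot check I have in mind would look something like this (a sketch only, using disk10 as the source and disk11 as the copy from my first conversion):

    # Dry-run rsync with checksums: prints anything that differs, changes nothing
    rsync -rcn --itemize-changes /mnt/disk10/ /mnt/disk11/
    # Or compare a few individual files by hand (replace some/file with a real path)
    md5sum "/mnt/disk10/some/file" "/mnt/disk11/some/file"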

15. I'm in the process of a massive upgrade to my Atlas clone. I've already upgraded from 5.0.5 to 6.7.0 and replaced a couple of failing drives in the nick of time (I've been really lucky). Now I'm converting all my disks from rfs to xfs. I've done 2 of 10 so far using the mirroring method described here.

     

As with the original Atlas, I have UnRAID running as a guest on an ESXi 5.1.0 host along with an Ubuntu guest that runs Plex. This Ubuntu guest, a local printer (for scans) and an IP cam are the only users that have access to the UnRAID shares.

     

When I did the first 2 disks I shut down the Ubuntu guest, turned off the printer and unplugged the cam to make sure nothing was writing to the array during the rsync operation. Obviously this means I can't use Plex during this process. My question is: rather than only adding the swap disk to the excluded disk(s) list under 'Global Share Settings' as the wiki instructs, could I also add the source disk and be assured that nothing is written to the source disk during the rsync operation?

     

    I have no VMs or dockers running on UnRAID (yet).

16. I'm currently at step 15 of the mirroring method for converting the file system of my first drive (disk10). I started with 10 disks formatted with RFS, added a disk11 formatted with XFS, and used rsync to copy all the data from disk10 to disk11 (rough sketch of the command at the end of this post). I've swapped the drive assignments of disk10 and disk11 (swap) and clicked on the disk names to swap the file system formats; however, both say "auto" for file system.

     

    I just want to make sure I don't mess this up. Should I set disk10 to xfs and disk11 to rfs? Since the other 9 disks are all set to auto, do I need to go through and set them to rfs as described at the end of step 11? Thanks.
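
    For context, the rsync copy from disk10 to disk11 mentioned above was along these lines (a sketch, not the exact command from the wiki):

    # -a preserves permissions/ownership/timestamps, -X keeps extended attributes
    rsync -avX --progress /mnt/disk10/ /mnt/disk11/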

  17. On 5/31/2019 at 3:33 PM, Frank1940 said:

OK, I forgot about that part of the procedure!  For those interested, here is that reason:

As I recall, to unassign Parity2, simply stop the array and unassign the drive.  Now restart the array and you are done.  You should also realize that there is no point in swapping in the new 6TB Parity2 drive until you have finished the conversion.  You're just going to unassign it, and when you reassign it, Parity will have to be rebuilt on the Parity2 drive at that time anyway.

     

    Another point.  After you unassign Parity2 in preparation to doing the conversion , be sure to run a non-correcting parity check as the first step.  If you don't have zero errors, stop at this point and take a deep breath.  

     

EDIT: I found it most helpful to build a table of the steps with the drive designations involved for each step.  I then checked off each step as I went along.  Also, I used the console to do the conversion, as using an SSH session can result in problems if the session gets terminated.

     

    Ok so I finally finished preclearing new drives and I’m ready to proceed.

     

I have a 6TB parity 1 drive, a 3TB parity 2 drive, and 10 x 3TB data drives. Drive 10 is failing and I want to replace it with a 6TB drive and then proceed to convert all the drives to XFS using the “mirroring” method, which requires there to be only one parity drive.

     

I just unassigned the 2nd parity disk, powered down, removed the disk, powered on, and tried to assign a precleared 6TB drive to disk 10, but it said invalid configuration and that parity 2 was missing. Is there something else I need to do to have Unraid forget about parity 2?

  18. 1 hour ago, Frank1940 said:

You should also realize that there is no point in swapping in the new 6TB Parity2 drive until you have finished the conversion.  You're just going to unassign it, and when you reassign it, Parity will have to be rebuilt on the Parity2 drive at that time anyway.

    Yea I'll add the 2nd parity after I finish all the mirroring. Thanks for the help.