Report Comments posted by trurl

  1. 31 minutes ago, itimpi said:

    Changing slots still does not necessarily mean you want the shares to be set up any differently.  

    Of course there should be no changes to user share settings, and there is no change.


    I always discourage sharing disks on the network anyway.


    Retaining disk share settings when those settings apply to a specific slot, rather than to a specific disk and its data, doesn't seem any more correct to me than applying default settings to all disks and letting the user make any changes needed.


    I agree, though, that either way can be confusing for the user, and making it more visible does make sense.

  2. It probably would have made more sense to post in the release thread, and you haven't given much information.


    Have you read the entire release thread?


    Is the VM using vdisks, or are you attempting to pass actual disks through to the VM?


    Downgrading to Minor

  3. 2 hours ago, Thorsten said:

    My VMs do not use a virtual disk as file, they use a physical disk.

    23 minutes ago, turnipisum said:

    Yep, same here! It's a kernel issue from what I know. 3 choices: go to beta 25, use network mapped drives, or switch them to vdisks.



  4. 3 minutes ago, Squid said:

    No change required.  (Assuming that you have a cache pool named "cache", which is the default)

    If you are using another pool, just substitute its name instead of "cache". For example, I have a pool named "fast" and my appdata is at /mnt/fast/appdata.


  5. 9 minutes ago, Frank1940 said:

    Personally, I would replace the parity drive first with a new, larger one, and then replace each data drive with a new one, rebuilding each from parity. See here:


    I guess it depends on what @cdoublejj means by "blowing away my array". In any case, this is getting off-topic for this thread so for more advice start a new thread in General Support.


  6. 13 hours ago, cdoublejj said:

    So I've had my Unraid for a few years now and am looking at blowing away my array, replacing all my drives, and starting new.


    Can I use this new mover function?  ....otherwise I was going to copy all my data over to some 10TB external drives over the network and copy it back once I rebuild my array (and my shares?)


    Does share configuration go away when I delete my array?

    The .cfg file for each share is stored on flash in config/shares. For a share to actually exist, though, it must have a top-level folder, named for the share, on a pool or array disk. Top-level folders are always shares, but any top-level folder that doesn't have a .cfg file has default settings (highwater, split any, minimum 0, include all, cache-no).


    Mover still only moves from array to pool, or pool to array, so unless you have a very large pool it isn't likely to help with moving your array files somewhere.


    Unassigned Devices can mount external drives so no need to do it over the network. As long as you have a single (possibly empty) array data disk you can start Unraid and use plugins and even dockers and VMs.
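The share/.cfg relationship described above can be sketched as a quick shell check. This is illustration only: the throwaway paths under a temp directory stand in for the real /boot/config/shares (share .cfg files) and /mnt/&lt;disk-or-pool&gt; top-level folders.

```shell
# Sketch only: simulate how shares are derived from top-level folders.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/mnt/disk1/Movies" "$ROOT/mnt/cache/appdata" "$ROOT/config/shares"
touch "$ROOT/config/shares/Movies.cfg"   # Movies has saved settings on flash

# Every top-level folder on a pool or array disk is a share; a share with
# no .cfg file simply gets the default settings.
for dir in "$ROOT"/mnt/*/*/; do
    share=$(basename "$dir")
    if [ -f "$ROOT/config/shares/$share.cfg" ]; then
        echo "$share: saved settings from flash"
    else
        echo "$share: default settings (highwater, split any, minimum 0, include all, cache-no)"
    fi
done
```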

  7. 5 hours ago, DZMM said:

    Realised it's a bad idea, as new files will get added to /mnt/apps/appdata. I'll try a different way.

    You can map the appdata for a specific application to the actual pool instead of to the appdata user share. So, you could use /mnt/cache/appdata in the docker mappings for some apps, and /mnt/apps/appdata in the docker mappings for other apps.


    And as long as the appdata user share is cache-only, mover will ignore it whether the appdata is on cache or apps.
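As a sketch of those per-app mappings (the container and image names here are hypothetical, and "apps" is just an example pool name):

```shell
# Hypothetical examples: bypass the /mnt/user/appdata user share and point
# each container's /config at a specific pool path instead.

# This app's appdata stays on the "cache" pool:
docker run -d --name app-on-cache \
  -v /mnt/cache/appdata/app-on-cache:/config \
  example/image

# This app's appdata lives on the "apps" pool:
docker run -d --name app-on-apps \
  -v /mnt/apps/appdata/app-on-apps:/config \
  example/image
```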

  8. 3 minutes ago, Entxawp said:

    Jesus, the beta is really buggy for me. It just removed my primary cache drive. Can anyone help me get it back / downgrade back to 6.9.3 stable?

    Unmountable, possibly unrelated to the beta.


    Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
