Report Comments posted by itimpi

  1. It is easy to recreate the docker.img file with its previous settings intact, but the fact that it went missing suggests something else is going on.   I suggest you post your system's diagnostics zip file in your next post in this thread so we can take a look and give more informed feedback.
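    For background on why recreating the image is low risk, here is a minimal sketch of where the settings actually live (standard Unraid locations, shown purely as examples):

        # Container templates are kept on the flash drive, not inside docker.img,
        # so a recreated image can be repopulated from Apps -> Previous Apps.
        ls /boot/config/plugins/dockerMan/templates-user/
        # Container working data (appdata) normally lives on a pool, also outside the image.
        ls /mnt/user/appdata/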

  2. 2 minutes ago, dhenke1690 said:

    Seems like the minimum free space should be storage (i.e. Primary vs Secondary) dependent. I would like to leave 500GB free on my 12TB drives in the Secondary Storage but I don't want to leave 500GB free on my primary storage which is an SSD cache drive. Is there a way to do that?


    Unfortunately not.   It has been suggested quite a few times as being desirable, but nothing has materialized.

  3. This is because you used Krusader and used a 'move' option rather than copy/delete.    It means you ended up bypassing the User Share system and hit the issue described in this section of the online documentation, accessible via the Manual link at the bottom of the Unraid GUI.  In addition, every forum page has a DOCS link at the top and a Documentation link at the bottom.

     

    You would not have gotten this behaviour if you had used Dynamix File Manager to perform the operation, as it always uses a copy/delete strategy precisely to avoid issues like this (amongst others) that can happen even when you request a move.   It would also have worked as expected if you had used a copy/delete strategy from within Krusader.
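    A minimal sketch of the copy/delete pattern described above, assuming a transfer between two user shares (share and file names are illustrative):

        # Stay on /mnt/user paths so the User Share system is never bypassed,
        # and copy first, removing the source only after the copy has succeeded.
        rsync -a /mnt/user/downloads/film.mkv /mnt/user/media/movies/ \
          && rm /mnt/user/downloads/film.mkv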

  4. 14 minutes ago, timocapa said:

    Perhaps - visually not represented in any way though, or rather misrepresented. Looks like it does what I wanted it to do, then does something else, so I'm considering it a bug

    I guess that ideally the delete checkbox should be disabled as soon as the location is changed?

  5. 1 hour ago, Lukáš Řádek said:

    Thanks. A situation that I feared with parity system is how to know if the parity error actually means that there is bad data on the parity drive (for example due to unclean shutdown) or on the data drive since that is the one that has been corrupted.

    You cannot.   Parity is not that clever - it only knows that something has gone wrong somewhere.

     

    Because of the sequence in which the writes happen, as long as a drive has not failed a lost write to the parity drive is far more likely than one to a data drive, so this is the assumption that is made.   The only way to make sure that a data drive does not have bad data is to either have checksums for the data or use BTRFS or ZFS as the file system type, as they have built-in check-summing.

     

    This is one of the reasons we recommend that the scheduled parity check is set to be non-correcting so that you only run a correcting check when you think none of your data drives has a problem.
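    A minimal sketch of the checksum approach mentioned above, assuming a share called "photos" (any checksum tool works; md5sum is just a common choice):

        # Generate checksums while the data is known to be good.
        cd /mnt/user/photos
        find . -type f ! -name 'checksums.md5' -exec md5sum {} + > checksums.md5
        # Verify later: with --quiet only failures are reported, and any failure
        # points at bad data on a data drive rather than a stale parity drive.
        md5sum -c --quiet checksums.md5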

  6. Just now, hawihoney said:

     

    I can add update_cron as a user script (that's fired during array start) to the User Scripts plugin. This would be a workaround.

     

     

    I agree, but it would be nice to know going forward whether firing this when needed should be the responsibility of the plugins that need it; if so, they can then be updated to no longer need the workaround.

     

    Perhaps many people will not need the workaround anyway if they have a plugin (such as my Parity Check Tuning plugin) that issues this as part of its install process, as the update_cron command is not plugin specific - it should pick up all cron jobs that any plugin has waiting to be activated.
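    For anyone who wants the workaround described above, a minimal sketch of what the user script could look like (the script path and the "At Startup of Array" schedule follow the usual User Scripts plugin layout, but treat them as assumptions):

        #!/bin/bash
        # Saved as e.g. /boot/config/plugins/user.scripts/scripts/run_update_cron/script
        # and scheduled to run "At Startup of Array" in the User Scripts plugin.
        # update_cron simply rebuilds the active crontab from the cron entries that
        # plugins have already put in place, so running it again is harmless.
        update_cron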

  7. There is also the possibility of a plugin that wants cron entries added running this itself as part of its install process (I know I do this for the Parity Check Tuning plugin).

     

    Not sure whose responsibility it should be to run this, but it would never cause problems to run it redundantly, and maybe plugins should not rely on it being part of the startup sequence as I can see the 6.13 release needing significant changes to the startup sequence.
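    A minimal sketch of the install-side approach, using a hypothetical plugin called "myplugin"; the .cron file location follows the convention the webGui scans for cron fragments, but the exact path a given plugin should use is an assumption here:

        # Run from the plugin's install script: write a cron fragment to the flash
        # drive, then ask Unraid to rebuild the active crontab from all fragments.
        mkdir -p /boot/config/plugins/myplugin
        echo '0 3 * * * /usr/local/emhttp/plugins/myplugin/scripts/housekeeping.sh &> /dev/null' \
          > /boot/config/plugins/myplugin/myplugin.cron
        update_cron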

     

  8. 11 minutes ago, sabertooth said:

    b) How to enable exclusive access without going through the arduous process of creating a new share and copying close to 3 TB of data.


    You should not have to go through anything complicated.    You just need to make sure that there are no files (or folders) for that share on the array or on any other pool, and that the share has no secondary storage set.  If any of these conditions are not met, the Exclusive Share setting is automatically set to NO.
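    A minimal sketch of how those conditions can be checked from the command line, assuming the share is called "appdata" (the name is an example):

        # Show every location that holds a top-level folder for the share.
        # For an exclusive share only the single intended pool should appear
        # (ignore the /mnt/user entry, which is the share's own mount point).
        ls -d /mnt/*/appdata 2>/dev/null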

  9. 1 minute ago, MAM59 said:

    Not fully correct. The speed degradation also happens if there is NO PARITY Drive at all!

    (I've tested all combinations, ZFS was always bad :-( )

    If that is the case, maybe it is an inherent problem when ZFS is used on a single-drive file system?    I cannot see why Limetech would have done anything special about ZFS being used in the array?   It would be nice to be proved wrong though.

  10. 6 minutes ago, B_Sinn3d said:

    I am seeing this as well.  I was planning on converting my current XFS disks to ZFS to take advantage of some of the ZFS features but I think I will wait until this gets sorted. I don't want to move around 70TB of data between my disks using unBalance with write speeds of ~55MB/s.

    I have a feeling that ZFS may have inherent performance issues when in the main array because of the way Unraid parity is handled.   I would love to be proved wrong but I think users should be aware there may be no easy solution.

  11. 2 hours ago, Lukáš Řádek said:

     

    Sure. But can I consider this case fixed by running the xfs file system repair? Or does it somehow affect parity?

    If you run an XFS file system repair via the GUI, this will update parity as it carries out any repair.

     

    Having said that, it is possible that a parity check may still find a small number of errors that need correcting after an unclean shutdown, as some writes in progress at the time of the unclean shutdown may not have made it to the parity drive.
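    For anyone curious what the GUI is doing, a minimal sketch of the equivalent commands; the md device name is an assumption (it depends on the disk slot and Unraid release) and the array needs to be started in Maintenance mode:

        # Dry run first: report problems without changing anything on disk.
        xfs_repair -n /dev/md1p1
        # A real repair against the md device keeps parity updated, which running
        # xfs_repair directly on the raw /dev/sdX device would not.
        xfs_repair /dev/md1p1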

  12. On 7/6/2023 at 2:38 PM, Aspect said:

    Just tried creating a new usb with 6.12.2 fresh config as suggested and I'm getting the same error: I'm only booting to the first line, bzroot, then the server reboots and does that loop over and over. I let it do this for 30 minutes and it never got past the bzroot line.

    This could be a case of the system being set to UEFI boot but the flash drive not being set up to support this.
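    If that is the cause, a minimal sketch of the usual fix (folder name as shipped on a standard Unraid flash drive; back the flash up first):

        # The flash drive ships with the EFI folder named "EFI-"; the trailing dash
        # disables UEFI booting, so renaming the folder enables UEFI boot. Do this
        # on another computer (flash root) or at /boot on a running server.
        mv /boot/EFI- /boot/EFI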