itimpi · Moderators · Posts: 20,699 · Days Won: 56

Everything posted by itimpi

  1. I think it must be something local to your system that is logging you out (assuming you are keeping the browser tab open to Unraid). I never get auto-logged out if I leave a browser tab open to the Unraid GUI.
  2. I wonder if it would be better to add a ‘Custom’ type entry that supports crontab-type scheduling. That would allow a single option to cover all scheduling requirements that are non-standard (there is a small cron-expression sketch after this list).
  3. You can also download a more recent version that is free for personal use from memtest86.com. Needed if you UEFI boot or have ECC RAM, neither of which the Unraid version supports.
  4. Interesting (not even internal ones?) - why was USB2 mentioned then, I wonder? Still, the question remains valid as to why change ports from one that works to one that does not.
  5. If you are having trouble with the USB Creator utility then you may have better success using the Manual install method.
  6. Why do you even want to do this? USB2 ports seem to be far more reliable when booting, and since Unraid runs from RAM after the initial load there is no significant performance advantage to using USB 3.x ports.
  7. Although you can, there is not normally any advantage over using a pool specifically set up for this purpose.
  8. Are you stopping the VM and Docker services before running mover? They can hold files open, which stops those files being moved, particularly in the appdata and system shares. Note that normally you WANT the system and appdata shares to be on a pool to maximise performance of docker containers.
  9. If FCP adds a check on Minimum Free Space being 0, perhaps it would make sense to initially only do the check for pools, as completely filling a pool seems to be the commonest problem with the most severe consequences. On an array drive it tends to only be a problem if the Fill-up allocation method is being used. On array drives the commonest issue seems to be an overly restrictive Split Level setting causing drives to fill up, which would be much harder for FCP to sensibly detect.
  10. The only share you have set to move files to the array automatically is the ‘isos’ one (Use Cache=Yes), and it already has all of its files on the array. All the others are set to Use Cache=Prefer, which indicates you want those files kept on the cache pool if at all possible. You have also not set a Minimum Free Space value for the pool; when the free space falls below that value, Unraid starts writing new files for shares with either Yes or Prefer as their Use Cache setting directly to the array, bypassing the cache pool. Not having it set is what allowed the cache pool to fill up completely.
  11. Personally I think ANY sensible default would be an improvement. Those who care will probably already have set a non-zero value.
  12. That means you have something (almost certainly a docker container) explicitly referencing /mnt/cache so that folder is getting created in RAM.
  13. It seems to me that there is never a reason you WANT the Minimum Free Space value for a User Share or a pool to be 0 (unless I have missed some special action triggered by 0). I would suggest that if it is found to be 0 then Unraid should change it to some sort of non-zero default. A suggestion would be something like 10% of the total size for a pool. Not sure of the best value for a User Share, so suggestions welcome; maybe something in the range of 10-100GB? To get the current behaviour a user can always then set this to a very small non-zero value. Resetting it back to 0 should then trigger setting it back to whatever default is set by Unraid (a sketch of this default rule follows after this list). Thoughts?
  14. Files in the lost+found folder are ones where the repair process could not find the directory entry to give them their correct name (so Unraid does not know where they belong). If you can sort them out then you can move them back to the correct share, but often they have cryptic numeric names so it is not that easy. Recovering from backups often tends to be easiest (if you have them). If you NEED to sort out the files in lost+found then the Linux ‘file’ command can be useful in determining what type a file is (and thus its probable extension); there is a small sketch of this after the list.
  15. I think that means the command probably did not complete. Running again should be OK.
  16. Are you saying that you formatted the disk in UD after the Preclear, and having done so pressed the Mount button for it in UD?
  17. Unless you regularly check for errors via the Unraid GUI, not having notifications enabled to advise you of problems as they arise is a recipe for potential data loss. It means that multiple errors can creep up on you unnoticed until they have gone beyond the level from which you can recover without data loss. One of the features of Unraid is its ability to continue working after a drive failure, and if you are not monitoring for this you will quite likely not take appropriate remedial action in time.
  18. This statement is not true! You can transfer more data than the size of the cache disk as long as the following hold (a sketch of the resulting overflow behaviour follows after this list):
      • The Use Cache setting is Yes for shares whose files should end up on the array but go via the cache, or Prefer for files you want to finally end up on the cache if room permits but go to the array otherwise.
      • The Minimum Free Space value for the cache is set larger than any single file to be transferred. A typical recommendation is twice the size of the largest file to give some headroom. You do NOT want the default value of 0 for this setting; personally I do not think a value of 0 should be allowed by Unraid, and something like 10% of the size of the pool would be a better default that would suit most people.
      • Files are being transferred one at a time (which would be typical). If multiple files are being transferred in parallel then the Minimum Free Space value must be increased to allow for the number of parallel transfers that can occur.
      As long as this is true, when Unraid sees the free space fall below the Minimum Free Space setting for the cache it will start by-passing the cache for new files and write them directly to the array. It is true, however, that there is not much point in having a cache drive that is smaller than the amount of new data you write between mover runs. For the initial load onto a new Unraid system it is advantageous from a performance perspective to not have a parity drive assigned, as most files will probably end up by-passing the cache and you do not want the performance penalty of updating parity (unless you are prepared to accept it to keep the data on the array drives protected from the outset).
  19. Do you mean it ends up powered off or just unresponsive? If powered off this suggests an external factor. You can try enabling the Syslog Server to get a log that will survive a reboot.
  20. I was not that clever! I merely looked at the syslog in the diagnostics and could see the plugin kicking off at that time and then logging the fact that it was stopping the containers. There are times it is easy to overlook the obvious on the basis that it cannot be that simple.
  21. Appreciate the sentiment, but I already have one in my hand to give me my morning caffeine shot.
  22. You have the CA Backup plugin scheduled to start at 6:00 am every day and that will be what is stopping the containers until the backup is finished.
  23. You are likely to get better informed feedback if you post your system’s diagnostics zip file.
  24. No idea, I am afraid - maybe somebody else will have an idea.
  25. That is to be expected if you have some files on a pool and the pool is not redundant (single drive or multi-drive and not using a RAID mode that provides redundancy).
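For the ‘Custom’ crontab-style schedule type suggested in item 2, here is a minimal Python sketch (not anything Unraid ships) showing how a standard five-field cron expression breaks down; the example expression and function name are purely illustrative.

```python
# Minimal sketch of a standard five-field crontab expression, of the kind a
# 'Custom' schedule type could accept. Nothing here is Unraid code; the
# example expression is illustrative only.
CRON_FIELDS = ["minute", "hour", "day of month", "month", "day of week"]

def describe(expr: str) -> dict:
    """Split a five-field cron expression into named fields for display."""
    parts = expr.split()
    if len(parts) != len(CRON_FIELDS):
        raise ValueError(f"expected {len(CRON_FIELDS)} fields, got {len(parts)}")
    return dict(zip(CRON_FIELDS, parts))

# Run at 03:30 on Mondays and Thursdays - a schedule the fixed
# hourly/daily/weekly/monthly choices cannot express.
print(describe("30 3 * * 1,4"))
```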
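Items 11 and 13 suggest substituting a non-zero default when Minimum Free Space is left at 0. Below is a minimal sketch of that rule; the function names and the exact 10% / 10-100GB figures are only the suggestions from the post, not anything Unraid actually implements.

```python
# Sketch of the default rule suggested in item 13: treat a configured
# Minimum Free Space of 0 as "unset" and substitute a non-zero default.
# The 10% and 10-100GB figures are suggestions, not Unraid behaviour.
GB = 1024 ** 3

def default_min_free_pool(pool_size: int) -> int:
    """Suggested default for a pool: 10% of its total size (bytes)."""
    return pool_size // 10

def default_min_free_share(share_size: int) -> int:
    """Suggested default for a User Share: 10% of size, clamped to 10-100GB."""
    return min(max(share_size // 10, 10 * GB), 100 * GB)

def effective_min_free(configured: int, default: int) -> int:
    """A configured value of 0 falls back to the default; any non-zero
    value, however small, is honoured so current behaviour stays reachable."""
    return default if configured == 0 else configured

# Example: a 1TB pool left at the default of 0 would get a 100GB floor.
print(effective_min_free(0, default_min_free_pool(1000 * GB)) // GB, "GB")
```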
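For the lost+found tidy-up described in item 14, here is a minimal sketch that runs the standard Linux ‘file’ utility over each entry so the cryptic numeric names can be matched to a likely file type. The /mnt/disk1/lost+found path is only an example and would need adjusting to the disk that was repaired.

```python
# Sketch: identify files recovered into lost+found using the Linux 'file'
# command, so they can be renamed with a sensible extension. The path below
# is an example only; point it at whichever disk was repaired.
import subprocess
from pathlib import Path

LOST_FOUND = Path("/mnt/disk1/lost+found")

for entry in sorted(LOST_FOUND.iterdir()):
    if entry.is_file():
        # 'file --brief' prints just the detected type, e.g. "JPEG image data"
        result = subprocess.run(
            ["file", "--brief", str(entry)],
            capture_output=True, text=True, check=False,
        )
        print(f"{entry.name}: {result.stdout.strip()}")
```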
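Item 18's point about Minimum Free Space comes down to a simple rule: while the cache pool has more free space than the setting, new files for Yes/Prefer shares land on the cache; once free space drops below it, new files go straight to the array. A minimal sketch of that decision follows (not Unraid's actual code, and the sizes are illustrative).

```python
# Sketch of the overflow behaviour described in item 18 for shares set to
# Use Cache = Yes or Prefer. Not Unraid internals; sizes are illustrative.
GB = 1024 ** 3

def write_target(pool_free: int, min_free: int) -> str:
    """Where a new file for a Yes/Prefer share is written."""
    return "cache pool" if pool_free > min_free else "array (cache bypassed)"

# A pool with 30GB free and Minimum Free Space set to 50GB: new files go
# straight to the array, so a transfer larger than the pool still completes.
print(write_target(pool_free=30 * GB, min_free=50 * GB))
```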