Everything posted by itimpi

  1. The ‘nobody’ user is normally the correct user. Have you tried running Tools->New Permissions against a share that has these problems?
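A minimal sketch of the kind of check New Permissions deals with, in case it helps: it just reports anything under a share that is not owned by nobody:users, which is the ownership that tool normally applies. The share name is only an example.

```python
#!/usr/bin/env python3
# Minimal sketch: report anything under a share that is NOT owned by nobody:users,
# which is the ownership Tools -> New Permissions normally applies.
# The share path is an example only - point it at the share with the problem.
import os
import pwd
import grp

SHARE = "/mnt/user/MyShare"   # hypothetical share name

want_uid = pwd.getpwnam("nobody").pw_uid
want_gid = grp.getgrnam("users").gr_gid

for root, dirs, files in os.walk(SHARE):
    for name in dirs + files:
        path = os.path.join(root, name)
        st = os.lstat(path)
        if st.st_uid != want_uid or st.st_gid != want_gid:
            print(f"{path}: uid={st.st_uid} gid={st.st_gid}")
```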
  2. There is a SpaceInvader video about this (I do not have the link to hand), so that is probably worth looking at. There is no way the docker image should be filling up if all containers are correctly configured to point to mapped volumes for their working storage.
  3. I would carefully check that you have not got that set in a path mapping for any of your docker containers. According to the diagnostics the ‘appdata’ share only exists and has files on cache_nvme, disk1 and disk2.
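If it helps, here is a minimal sketch that lists the host path each running container maps in, so a mapping that points somewhere unexpected stands out. It assumes the standard docker CLI is available.

```python
#!/usr/bin/env python3
# Minimal sketch: list the host path -> container path mappings of every
# running container so an unexpected mapping stands out.
import json
import subprocess

ids = subprocess.run(["docker", "ps", "-q"],
                     capture_output=True, text=True, check=True).stdout.split()

for cid in ids:
    raw = subprocess.run(["docker", "inspect", cid],
                         capture_output=True, text=True, check=True).stdout
    info = json.loads(raw)[0]
    name = info["Name"].lstrip("/")
    for mount in info.get("Mounts", []):
        print(f"{name}: {mount.get('Source')} -> {mount.get('Destination')}")
```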
  4. You could put the drive in its own pool and create a share that is exclusive to that pool, which you can then select via the GUI. It is a shame there is not an option to enter the path manually, as this would allow for UD drives. I think that if the drive is in a pool and formatted as XFS it may now be readable in Windows under the WSL sub-system - something I have been meaning to try for myself.
  5. You should be able to just delete the config/network.cfg file from the flash drive and reboot to go back to default network settings.
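For reference, a minimal sketch of that step (the flash drive is normally mounted at /boot on a running system):

```python
#!/usr/bin/env python3
# Minimal sketch: remove the stored network settings so Unraid falls back to
# defaults on the next boot. The flash drive is normally mounted at /boot.
import os

cfg = "/boot/config/network.cfg"
if os.path.exists(cfg):
    os.remove(cfg)
    print(f"Removed {cfg} - reboot to pick up default network settings")
else:
    print(f"{cfg} not present, so settings are already at defaults")
```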
  6. Start the array in normal mode and it should mount. You will probably find you now have a lost+found folder on the drive, which is where the repair process puts any files/folders for which it could not find the directory entry giving their correct names. Sorting this out is a manual process (although you can at least use the Linux ‘file’ command to find the content type of each file) and you have to decide if it is worth the effort.
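If you do end up with a lost+found folder, something like this minimal sketch can run the ‘file’ command over everything in it to show what each recovered item contains. The disk path is only an example.

```python
#!/usr/bin/env python3
# Minimal sketch: run the Linux 'file' command over everything in a lost+found
# folder to show what kind of content each recovered item holds.
# The disk path is an example - point it at the drive that was repaired.
import os
import subprocess

LOST = "/mnt/disk1/lost+found"   # hypothetical location

for root, dirs, files in os.walk(LOST):
    for name in files:
        path = os.path.join(root, name)
        kind = subprocess.run(["file", "-b", path],
                              capture_output=True, text=True).stdout.strip()
        print(f"{path}: {kind}")
```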
  7. I don’t see in theory why it cannot be an Unassigned Device or a pool device.
  8. This suggests you may have something scheduled to run at that time (e.g. appdata backup), so it is worth checking for that. You also seem to have an invalid entry in /etc/cron.d/root, so it is worth checking that out to see what it is.
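A rough sketch of how you could eyeball /etc/cron.d/root for entries that do not look right (it is only a heuristic check, not a full cron parser):

```python
#!/usr/bin/env python3
# Rough sketch: flag lines in /etc/cron.d/root that do not look like either a
# comment, an environment assignment, or "five time fields plus a command".
CRON = "/etc/cron.d/root"

with open(CRON) as fh:
    for lineno, line in enumerate(fh, 1):
        text = line.strip()
        if not text or text.startswith("#"):
            continue                      # blank line or comment
        fields = text.split()
        if "=" in fields[0]:
            continue                      # environment assignment, e.g. SHELL=
        if len(fields) < 6:
            print(f"line {lineno} looks suspect: {text}")
```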
  9. Yes to Maintenance mode, but there is no need to use the terminal: you can run the check from the GUI by clicking on the drive as described in the link, and this is recommended as it is less error prone.
  10. You have Export=No, which means they are not available via the network. For security reasons this is the default setting and you have to explicitly set it to one of the other options.
  11. The syslog in the diagnostics only shows what happened since the last boot. You should enable the syslog server (probably with the option to Mirror to Flash set) to get a syslog that survives a reboot so we can see what leads up to a crash. If using the mirror option the syslog file is stored in the 'logs' folder on the flash drive.
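Once the mirror option is enabled, something like this minimal sketch will show the tail of the mirrored file. The exact filename under the 'logs' folder is an assumption, so adjust it to whatever you find there:

```python
#!/usr/bin/env python3
# Minimal sketch: print the last lines of the syslog that was mirrored to the
# flash drive. The filename is an assumption - check the 'logs' folder on the
# flash for what is actually there.
from collections import deque

MIRROR = "/boot/logs/syslog"   # mirrored copy in the flash drive's 'logs' folder

with open(MIRROR, errors="replace") as fh:
    for line in deque(fh, maxlen=50):     # keep only the final 50 lines
        print(line, end="")
```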
  12. You should enable the syslog server (probably with the mirror to flash setting enabled) to get a syslog that survives a reboot so we can see if there is anything being logged leading up to the crash. The syslog in the diagnostics only shows what happened since the last boot.
  13. Your Split Level setting is too restrictive. Split Level overrides the other allocation settings when there is contention over which drive to select.
  14. The flash drive can become corrupted - but that can easily be rebuilt with the standard OS files. There is always the possibility of configuration files becoming corrupted.
  15. I think it is really trying to say that if Unraid successfully boots, the Unraid OS files cannot be corrupt, as the archive files are checked against their checksums while the OS is loaded into RAM. If the archive files actually DO get corrupted and the boot therefore fails, then it is easy to rewrite good copies to the flash.
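As an illustration of that idea, here is a minimal sketch that recomputes the SHA256 of the bz* archive files on the flash. It only compares against a *.sha256 companion file if one happens to be present, since not every release layout necessarily includes them:

```python
#!/usr/bin/env python3
# Minimal sketch: recompute the SHA256 of the Unraid OS archive (bz*) files on
# the flash drive. If a matching *.sha256 companion file is present (an
# assumption - adjust for your release layout) the digest is compared against
# it, otherwise it is just printed for manual comparison.
import glob
import hashlib
import os

for target in sorted(glob.glob("/boot/bz*")):
    if target.endswith(".sha256"):
        continue
    h = hashlib.sha256()
    with open(target, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    digest = h.hexdigest()
    sumfile = target + ".sha256"
    if os.path.exists(sumfile):
        expected = open(sumfile).read().split()[0].lower()
        status = "OK" if digest == expected else "MISMATCH"
    else:
        status = digest
    print(f"{os.path.basename(target)}: {status}")
```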
  16. I do not really know. In the 6.13 release the current Unraid array type will become just another pool type, but whether that will add trim support I have no idea.
  17. I suspect that simply adding the bcache module to the kernel is not all that would be involved, or it would probably already have happened. For instance, how would it interact with the Unraid User Share system? Having said that, if such issues could be resolved it might be an attractive feature, and at the very least it would replace the need for the current Use Cache=Yes option on User Shares.
  18. That only applies to ZFS pools - the documentation explicitly excludes ZFS drives used in the main array. You could use a dummy flash drive to satisfy the (current) restriction that there must be at least 1 drive in the main array and then run all the SSDs in a pool (using BTRFS or ZFS), as in principle pools can be trimmed. However, even doing this you need to check that any HBA you use supports trim.
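A quick way to see what the drives (and whatever controller they sit behind) actually report for trim is something like this minimal sketch, which just wraps lsblk:

```python
#!/usr/bin/env python3
# Minimal sketch: show the discard (TRIM) capability each block device reports.
# A DISC-GRAN / DISC-MAX of 0B means the device, or the controller in front of
# it, is not passing TRIM through.
import subprocess

subprocess.run(["lsblk", "-o", "NAME,TYPE,TRAN,DISC-GRAN,DISC-MAX"], check=True)
```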
  19. Probably not. It is not something I have tried. Are you aware that disks in the main array cannot be trimmed? This might lead to a performance drop-off over time depending on the SSD models in use and whether they need periodic trims to maintain performance.
  20. Can you successfully stop the array, as opposed to shutting down or rebooting the server? It might be easier to determine what is causing the issue if the server is still running while the problem occurs. I had a similar issue and found it was due to a long-running rsync process doing a backup that prevented the unmounts of the drives.
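Before stopping the array, something like this minimal sketch can show which processes still have files open on each array disk. It assumes the fuser command is available (lsof can be used in much the same way):

```python
#!/usr/bin/env python3
# Minimal sketch: list the processes that still have files open on each array
# disk mount, since anything like a long-running rsync will block the unmounts.
# Assumes the 'fuser' command is available on the system.
import glob
import subprocess

for mount in sorted(glob.glob("/mnt/disk[0-9]*")):
    print(f"--- {mount} ---")
    # -v: verbose listing of processes, -m: treat the argument as a mount point
    subprocess.run(["fuser", "-vm", mount])
```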
  21. The syslog snippet you posted is quite short and shows all drives spinning down and then a couple spinning back up. We might be able to tell more if we had the full diagnostics zip file, so we could check how things are set up, and if the syslog in it covered a longer period there would be more occurrences of your issue to look at.
  22. Just to point out that performance of the array is measurably lower once you have a parity drive, so if you are not really interested in the protection it provides you need to decide if the trade-off in performance is worth it for the peace of mind of having no exclamation icons.
  23. You could try uninstalling the Mover Tuning plugin and see if it then works as expected.
  24. You can call it as many times as you like without problems. Every time it is called it simply refreshes the cron entries from whatever entries any part of Unraid or any plugin has asked for. If there are no changes then it has no effect. I know because my Parity Check Tuning plugin can call it at any time, depending on the level and type of array activity it detects, to adjust the timings of some of its internal tasks that are handled via cron.
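Purely as an illustration of the pattern (assuming the command in question is the update_cron helper), a plugin typically rewrites its own cron fragment and then asks Unraid to rebuild the active entries. The fragment path and plugin name here are hypothetical:

```python
#!/usr/bin/env python3
# Illustrative sketch only: write (or rewrite) a plugin's cron fragment and then
# call update_cron so Unraid rebuilds the active cron entries from whatever
# fragments exist. The fragment path and plugin name are hypothetical.
import subprocess

FRAGMENT = "/boot/config/plugins/dynamix/myplugin.cron"   # hypothetical plugin

entry = (
    "# hypothetical plugin task, re-checked every 10 minutes\n"
    "*/10 * * * * /usr/local/emhttp/plugins/myplugin/monitor.sh > /dev/null 2>&1\n"
)

with open(FRAGMENT, "w") as fh:
    fh.write(entry)

# Calling this repeatedly is harmless: it simply regenerates the entries, and
# if nothing has changed it has no effect.
subprocess.run(["update_cron"], check=True)
```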