itimpi

Everything posted by itimpi

  1. If you have an EFI- folder on the flash drive then you need to rename it to remove the trailing - character for UEFI boot to work.
  2. It might, if the reason was that you were not using a /mnt/user path with your docker containers and so could not take advantage of the Minimum Free Space value to stop the cache pool filling up. Having said that, since your docker.img file is set to be 35GB and is set to be on the cache pool, that is in practice the minimum you should expect to be used. Since mover will not move open files, you may also have a docker container running that is keeping file(s) open in the ‘appdata’ share - you would need to drill down on the space used there to find out.
  3. Any errors on other drives during the sync operation will have caused corruption on the drive being rebuilt, as the rebuild process requires all other drives to be read without error if the rebuilt drive is to be free of corruption.
  4. Not quite sure what you are asking. The information in the link describes how to get standard diagnostics via GUI or command line. The syslog in the diagnostics is the RAM copy and only shows what happened since the reboot. It could be worth enabling the syslog server to get a log that survives a reboot so we can see what happened prior to the reboot. The mirror to flash option is the easiest to set up, but if you are worried about excessive wear on the flash drive you can put your server’s address into the Remote Server field.
  5. The syslog in the diagnostics is the RAM copy and only shows what happened since the reboot. It could be worth enabling the syslog server to get a log that survives a reboot so we can see what happened prior to the reboot. The mirror to flash option is the easiest to set up, but if you are worried about excessive wear on the flash drive you can put your server’s address into the Remote Server field.
  6. If the following are met: you are using Unraid 6.12.x; all of your 'appdata' share is on the 'cache' pool; and you have enabled Exclusive shares under Settings->Global Share Settings - then you can get the same performance using /mnt/user/appdata/somename as using /mnt/cache/appdata/somename.
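With Exclusive shares active, /mnt/user/<share> is exposed as a direct symlink to the pool, which is why the FUSE overhead disappears. The sketch below mimics that layout in a temporary directory so it is self-contained; the paths are a mock-up, not a real server.

```shell
# Illustration only: mock up the symlink an exclusive share uses.
# On a real 6.12.x server you would just run: readlink /mnt/user/appdata
tmp=$(mktemp -d)
mkdir -p "$tmp/mnt/cache/appdata" "$tmp/mnt/user"
ln -s "$tmp/mnt/cache/appdata" "$tmp/mnt/user/appdata"  # what an exclusive share looks like

if [ -L "$tmp/mnt/user/appdata" ]; then
    echo "exclusive: $(readlink "$tmp/mnt/user/appdata")"
else
    echo "not exclusive (going through FUSE)"
fi
rm -rf "$tmp"
```

If the share is not exclusive (e.g. files exist on more than one pool/array drive), /mnt/user/<share> is an ordinary FUSE directory rather than a symlink, and the check above reports that.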
  7. Should probably be enough - only you can tell what files you tend to download. Note, however, if you have anything writing directly to the cache drive (e.g. as /mnt/cache/somename) then it will by-pass this check. It only applies if writing to a User Share that is set to use caching (and not set to only be on the cache pool).
  8. Make sure that you have set the Minimum Free Space for the cache drive to be larger than the biggest file you expect to write to it for caching. Then when the free space falls below that Unraid will start by-passing the cache and writing files directly to the array.
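The decision described above can be sketched as a simple comparison (the variable names are illustrative, not Unraid's internal code; /tmp stands in for /mnt/cache so the snippet runs anywhere):

```shell
# Sketch of the Minimum Free Space check: if free space on the cache pool
# has fallen below the configured minimum, write directly to the array.
min_free_kb=$((50 * 1024 * 1024))   # e.g. 50GB - set this above your largest expected file
# On a real server you would query /mnt/cache instead of /tmp:
free_kb=$(df -k --output=avail /tmp | tail -1 | tr -d ' ')

if [ "$free_kb" -lt "$min_free_kb" ]; then
    echo "cache below minimum free space -> writing to array"
else
    echo "writing to cache"
fi
```

The key point is that the check happens before each file is written, so the minimum must be larger than any single file you expect to cache - Unraid cannot know a file's final size in advance.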
  9. Ok - that is the long way around in my opinion. Going via Apps->Previous Apps does this more conveniently I think, and is functionally equivalent.
  10. Not unexpected as that is what you configured it to be in Settings->Docker
  11. You only get the saved settings if you use Apps->Previous Apps
  12. Yes - although it is a case of putting the appdata share contents back onto the nvme drive (your wording implies putting them into the docker.img file, which would be wrong)! Not completely sure - it is certainly misbehaving at the moment, but I do not know the cause.
  13. That test increments in 10% intervals and you can assume something like 1 to 2 hours per TB.
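The rule of thumb above is easy to apply for a specific drive (the size here is a hypothetical example; actual time depends on drive speed, controller, and system load):

```shell
# Rough time estimate at 1 to 2 hours per TB, per the rule of thumb above.
size_tb=10   # hypothetical drive size in TB
echo "Expect roughly $((size_tb * 1)) to $((size_tb * 2)) hours for a ${size_tb}TB drive"
```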
  14. If the docker.img file is corrupt then you should assume that you need to reinstall all of them. Easy enough to do via Apps->Previous Apps which puts back the binaries with previous settings intact. However you will first need to put back the 'appdata' share contents if you want any of them to remember their state.
  15. Your syslog is full of messages along the lines of:
      Feb 7 18:24:17 Paradyne kernel: critical medium error, dev nvme0n1, sector 193991280 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2
      Feb 7 18:24:17 Paradyne kernel: I/O error, dev loop2, sector 33488120 op 0x0:(READ) flags 0x80700 phys_seg 5 prio class 2
      where the first line suggests a problem with your nvme drive, and the second one that docker.img (which is on the nvme drive) is corrupt so any docker will probably fail.
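One way to spot such entries yourself is to filter the syslog for medium and I/O errors. The snippet below runs against a small sample file so it is self-contained; on a real server you would point grep at /var/log/syslog (or the syslog inside the diagnostics zip) instead.

```shell
# Build a tiny sample syslog so the example is self-contained.
cat > /tmp/sample_syslog <<'EOF'
Feb  7 18:24:17 Paradyne kernel: critical medium error, dev nvme0n1, sector 193991280 op 0x0:(READ)
Feb  7 18:24:17 Paradyne kernel: I/O error, dev loop2, sector 33488120 op 0x0:(READ)
Feb  7 18:24:18 Paradyne kernel: eth0: link up
EOF

# Show only the lines that indicate drive or loop-device trouble.
grep -E 'medium error|I/O error' /tmp/sample_syslog
rm -f /tmp/sample_syslog
```

Errors against a loopN device usually point at the image file mounted on that loop (loop2 is typically docker.img), while errors against nvmeXnY or sdX point at the physical drive underneath.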
  16. That is the device name and is not used by Unraid to identify drives (instead Unraid uses the drive serial number). Something else must be going on.
  17. You could delete the config/vfio.cfg file on the flash drive and reboot.
  18. When you restart the array in normal mode that disk should now mount.
  19. Ext4 is not a supported file system type for the main array. It can, however, be mounted using the Unassigned Devices plugin to facilitate copying files off it.
  20. This will be possible when (as expected) the need to have at least 1 drive in the main array is removed. You could also do it now by upgrading the licence and using a dummy flash drive as the required array drive.
  21. You need to distinguish between 'pool', which is a group of one or more disks that are not part of the main array, and 'cache', which is functionality associated with a share (although there is confusion because, for historical reasons, people often have a pool called 'cache'), and you can have multiple pools. If you only set up 'primary' storage for a share then there is no caching involved, and you can specify whether that primary storage is a pool or the main array. In the scenario you describe above the primary storage would be the pool you specify, and all files for such shares would be on that pool. Many new users are now going that route. In the future we expect the need to have a dummy USB drive in the Unraid type array will be removed. Note that a multi-drive pool has to be either btrfs or zfs; you can only use xfs on a single-drive pool.
  22. You have the file ‘mounts’ which should not be there - you must have something creating it. It is also not normal for the RecycleBin folder to be there, but that may be an artifact of whatever is causing the ‘mounts’ file to appear. There should be nothing manually mounted directly under /mnt. If you are manually mounting anything then it should be under /mnt/addons. If you use the Unassigned Devices plugin to mount anything it will appear under /mnt/disks or /mnt/remotes as appropriate.
  23. Yes. There is a known issue with performance when ZFS is used on array drives. This does not apply when ZFS is used in a pool.
  24. They are text files so can be opened and examined easily enough.
  25. They will if you use Apps->Previous Apps to select and install them.