Everything posted by itimpi

  1. Fair enough, but with the correct settings it will replicate the built-in 6.10.3 functionality.
  2. An alternative is to use the Parity Check Tuning plugin that DOES work with the 6.10.3 release (as well as earlier releases) and gives you even more control over the process than the built-in functionality.
  3. Yes. The share level setting is designed to handle the case of array drives getting full, not a pool/cache getting full. There was a time in the past when the higher of the share level setting and the pool setting was (incorrectly, I believe) applied to the pool/cache, but I think this is no longer the case.
  4. This is what I would recommend. There is no reason not to run the extended test on both drives in parallel as the test is completely internal to the drive. The process for rebuilding the drives is covered in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI, but it would be a good idea to wait until you have gotten feedback on the diagnostics after the extended tests before going ahead with that.
  5. I agree, but there is no reason that the help text cannot mention overflow to the array with the correct share settings (if just for consistency).
  6. This is a frequent recommendation for getting back to a default configuration. If you feel a bit paranoid (no reason not to be when protecting your data), you always have the option of renaming the original file rather than deleting it.
  7. I guess that help text could do with an update to make this clear. I think it is mentioned on the Share Settings page but not here. Personally I would like the default for pools/cache to not be 0 so new users are less likely to get bitten by this issue, but that is a different conversation.
  8. Most of the time the upgrades are painless, but it is always worth taking precautions against anything going wrong. Make sure you have a backup of your flash drive before attempting the upgrade as you can then easily revert by copying the backup back onto the flash drive (a rough sketch of taking such a backup is included after this list). It is a good idea to turn off auto-start of the array until you have done an initial check after the upgrade. Temporarily disabling the Docker and VM services is also not a bad idea. The one item that most frequently causes problems is if you have VMs with hardware pass-through, as the IDs of the hardware can change. In the worst case you can find an ID associated with a GPU now ends up assigned to an HBA. Make sure that you do not have any VMs set to auto-start until you can check the passed-through hardware.
  9. All recent versions of Unraid REQUIRE you to have a password for the GUI, and it should be set via the GUI to ensure it is persistent. If you have a password set and you are not being prompted for it then that means the browser must be caching it.
  10. You should have a Minimum Free Space value set for the cache (click on it on the Main tab to get to this setting) to stop it getting completely full. Ideally this value should be something like twice the size of the largest file you expect to write, or larger (see the sizing sketch after this list). When the free space on the cache drops below this value then for subsequent files Unraid will bypass the cache and start writing directly to the array. If you do not set a Minimum Free Space value then Unraid will keep selecting the cache for new files, and if they will not fit you will get out-of-space type errors occurring.
  11. It depends on what you want as the final destination. Using Yes means you want the files to eventually end up on the array; using Prefer means you want them to end up on the cache. You can click on the text in the GUI for this setting to get more detail.
  12. If this is via splitters then it can definitely cause issues, as under load you can end up trying to draw more current than can stably be delivered over a single cable. If you are using splitters, the molex->SATA ones seem to be less problematic.
  13. FYI: Only the zip was required as all the other files you posted are already in the zip.
  14. No - what you have just done erases all its contents beyond any chance of recovery. What was recommended was to instead try and mount it as an Unassigned Device, as that might have mounted OK and you could then have copied files off it instead of trying to sort out the lost+found folder on the emulated disk.
  15. You are likely to get better informed feedback if you post your system’s diagnostics zip file.
  16. If you use ssh then the ‘diagnostics’ command at the command line will put the results into the ‘logs’ folder on the flash drive (a minimal example of invoking this remotely is sketched after this list).
  17. For this particular use case it might be better to set the Include option for the share to only use the 10TB drive, so the Allocation method is then irrelevant. Note however this will not automatically move files already on the array - it will only apply to new files you copy to the array, and existing files would have to be moved manually.
  18. Running an xfs_repair on the emulated disk2 will not remove the disabled (red ‘x’) state, as disabled status can only be removed by doing a rebuild. However, it can clear a disk showing as unmountable. You did not mention whether the xfs_repair output indicated errors or whether it completed successfully. Have you restarted the array in Normal mode, and does the emulated disk2 now mount? If so, is there a lost+found folder on it, and if so how much content? Posting new diagnostics might be a good idea.
  19. Another possibility is the built-in WireGuard which I know definitely runs when the array is not started as I have used that myself.
  20. No. Unraid is specifically designed to load into RAM from the USB stick on every boot and subsequently run from RAM and use the USB stick for its licence and storing settings.
  21. Only for reads. If appdata is on the array then all writes are slowed by the requirement to keep parity updated as well. This can cause problems for two reasons: if the cache gets really full then it is more likely to get file system level corruption, and Unraid picks the target drive for a new file before it knows how big it is going to be, so if the file subsequently fails to fit you get a write error (i.e. Unraid does not change its mind). You need a Minimum Free Space value on the cache that is larger than the largest file you expect to write to avoid this scenario, so Unraid gracefully switches to bypassing the cache and using the array when the free space drops below this value (see the sizing sketch after this list).
  22. Note that this is an OR option, and that mover will try and move anything off the array and onto the cache, so it would not make sense to do this. There is nothing wrong with having appdata as Use Cache=Only once you have ensured it is all off the main array. The reason that many people use the Prefer option is that it allows for overflow to the array if the cache gets full, and auto-move back if space later becomes available. If using the Prefer option then make sure you have a sensible value for the Minimum Free Space setting on the cache so Unraid knows when to start overflowing to the array. There is nothing to stop you having appdata purely on the array, but most people do not want this as you normally get much better performance if it is on a pool/cache.
  23. The users in Unraid are normally only for accessing shares over the network, and a Plex container will be bypassing that level. I have not tried to do this myself, but I suspect you would have to set the appropriate permissions at the Linux level to get the restrictions enforced.
  24. What have you got set for the Minimum Free Space for the pools/cache? I see you have it set for the share, but I could not check your pools since no diagnostics were provided. If it is not set correctly it may stop Unraid from tidily starting to bypass the cache/pool as it gets near full.
  25. Not quite sure which licence you are talking about? The Unraid licence is tied to the USB stick used to boot Unraid, and that stick needs to stay plugged in while running Unraid. If you want multiple Unraid instances you need multiple Unraid licences.
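
For the flash backup mentioned in post 8, here is a minimal sketch rather than a definitive method. It assumes it is run on the server itself (for example over ssh), where the flash drive is mounted at /boot, and that a Python interpreter is available there; the destination path is purely illustrative.

```python
# Minimal sketch: zip up the Unraid flash drive (mounted at /boot) before an upgrade.
# The destination path below is an assumption - point it at any share/folder you like.
import shutil
from datetime import date

backup_name = f"/mnt/user/backups/flash-backup-{date.today()}"  # illustrative path
archive = shutil.make_archive(backup_name, "zip", "/boot")
print(f"Flash backup written to {archive}")
```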
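To illustrate the Minimum Free Space rule of thumb from posts 10 and 21, here is a small sketch; the 50 GB figure is just an example and the helper function is made up for illustration, not an Unraid API.

```python
# Rule of thumb from posts 10 and 21: set Minimum Free Space on the pool/cache to at
# least twice the largest file you expect to write, so Unraid overflows to the array
# before a file can fail to fit.
def recommended_min_free_space(largest_expected_file_bytes: int, factor: int = 2) -> int:
    """Suggested Minimum Free Space in bytes (illustrative helper only)."""
    return largest_expected_file_bytes * factor

largest_file = 50 * 1024**3  # e.g. a 50 GB media file (example value)
min_free = recommended_min_free_space(largest_file)
print(f"Suggested Minimum Free Space: {min_free / 1024**3:.0f} GB")
```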
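As a minimal example of post 16, the sketch below drives the diagnostics command from another machine over ssh and then copies the resulting zip back; root@tower is an assumed address and should be replaced with your server's name or IP.

```python
# Run the built-in 'diagnostics' command on the Unraid server over ssh, then copy the
# resulting zip (written to the 'logs' folder on the flash drive, i.e. /boot/logs) back.
# "tower" is an assumed hostname - substitute your own server's name or IP address.
import subprocess

subprocess.run(["ssh", "root@tower", "diagnostics"], check=True)
subprocess.run(["scp", "root@tower:/boot/logs/*diagnostics*.zip", "."], check=True)
```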