itimpi

Moderators
Everything posted by itimpi

  1. You are likely to get better informed feedback if you post your system’s diagnostics zip file.
  2. If you use ssh, then the ‘diagnostics’ command at the command line will put the results into the ‘logs’ folder on the flash drive (see the example after this list).
  3. For this particular use case it might be better to set the Include option for the share to use only the 10TB drive, so the Allocation method then becomes irrelevant. Note, however, that this will not automatically move files already on the array - it only applies to new files you copy to the array, and existing files would have to be moved manually.
  4. Running xfs_repair on the emulated disk2 will not remove the disabled (red ‘x’) state, as disabled status can only be removed by doing a rebuild. It can, however, clear a disk that is showing as unmountable (see the command sketch after this list). You did not mention whether the xfs_repair output indicated errors or whether it completed successfully. If it did complete, have you restarted the array in Normal mode, and does the emulated disk2 now mount? If so, is there a lost+found folder on it, and if so how much content? Posting new diagnostics might be a good idea.
  5. Another possibility is the built-in WireGuard which I know definitely runs when the array is not started as I have used that myself.
  6. No. Unraid is specifically designed to load into RAM from the USB stick on every boot, subsequently run from RAM, and use the USB stick for its licence and for storing settings.
  7. Only for reads. If appdata is on the array then all writes are slowed by the requirement to keep parity updated as well. A cache that gets really full can cause problems for two reasons: it is more likely to suffer file system level corruption, and because Unraid picks the target drive for a new file before it knows how big the file is going to be, a file that subsequently fails to fit produces a write error (i.e. Unraid does not change its mind). To avoid this scenario you need a Minimum Free Space value on the cache that is larger than the largest file you expect to write, so that Unraid gracefully switches to bypassing the cache and using the array when the free space drops below that value (an illustrative check follows this list).
  8. Note that this is an OR option, and that mover will try to move anything off the array and onto the cache, so it would not make sense to do this. There is nothing wrong with having appdata as Use Cache=Only once you have ensured it is all off the main array. The reason that many people use the Prefer option is that it allows for overflow to the array if the cache gets full, and auto-move back to the cache if space later becomes available. If using the Prefer option then make sure you have a sensible value for the Minimum Free Space setting on the cache so Unraid knows when to start overflowing to the array. There is nothing to stop you having appdata purely on the array, but most people do not want this as you normally get much better performance if it is on a pool/cache.
  9. The users in Unraid are normally only for accessing shares over the network, and a Plex container will be bypassing that level. I have not tried to do this myself, but I suspect you would have to set the appropriate permissions at the Linux level to get the restrictions enforced.
  10. What have you got set for Minimum Free Space on the pools/cache? I see you have it set for the share, but I could not check your pools since no diagnostics were provided. If it is not set correctly it may stop Unraid from tidily bypassing the cache/pool as it gets nearly full.
  11. Not quite sure which licence you are talking about. The Unraid licence is tied to the USB stick used to boot Unraid, and that stick needs to stay plugged in while Unraid is running. If you want multiple Unraid instances you need multiple Unraid licences.
  12. In theory you should be able to run the basic NAS functionality in 2GB but some functions might not work well. An example would be that you would have to do any upgrade manually as trying to upgrade via the GUI will fail with only 2GB of RAM.
  13. I suspect that this is a coincidence. The defaults for the Parity Check Tuning plugin changed some releases ago and if you had never clicked Apply on its settings screen the defaults will take effect. Simply make sure the settings are what you now want and hit Apply and those settings will be what are used in the future.
  14. They do not as long as they are at least as large as the largest data drive.
  15. It is probably not that simple. The ‘/mnt/cache/lxc’ location is part of the ‘/mnt/user/lxc’ share, as a user share is a composite view that includes the ‘lxc’ folder on any array drive or pool drive.
  16. This is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  17. That is where they are MEANT to end up with the Yes setting. You might want to read the help built into the GUI about how this setting works and the action (if any) that mover subsequently takes.
  18. You are likely to get better informed feedback if you post your system’s diagnostics zip file after making the change to a share. It sounds as if Unraid may be failing to write to the USB drive.
  19. In Unraid parity is real-time. Parity is file system agnostic and works at the raw sector level, so if you have a formatted drive then as far as Unraid is concerned that disk is not empty, as the file system data has to be reflected in the parity data. Once you have created the initial parity data, then when you subsequently write data to the array Unraid will only update the parity sectors corresponding to the file just written (a toy illustration follows this list).
  20. Your config/go file tries to start up the Unraid management system (emhttpd) twice. This, I suspect, is the source of your symptoms (a sketch of the stock file follows this list).
  21. The parity will not be valid if any data drive had read errors while building the parity. In terms of getting back to a sensible state: were there any changes to the data on other drives during the latest attempt at building parity? If not, do you still have the old parity drive intact? Do you still have the data drive that you replaced, with its data intact? It might be worth posting your system’s diagnostics zip file so we can see exactly what the current state of affairs is.
  22. I think this requires MacOS to be loaded, and is not built into the hardware. Unraid runs at the bare metal level so MacOS would not be present.
  23. That means the drive has not been partitioned. UD expects there to be a partition on the drive.
  24. No - the two are completely independent of each other. You basically have to decide which of the following you want: either forbid mover from running while a parity check is in progress (via the Mover Tuning settings), or let mover start despite a parity check running and let the Parity Check Tuning plugin pause the parity check until mover finishes.
  25. No, you do not need to do this. When mover runs, then if the parity check is not paused and you have the appropriate setting enabled in the Parity Check Tuning plugin, the parity check will automatically pause when mover starts and automatically resume when it finishes.
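
Relating to item 2 above: a minimal example of generating the diagnostics over ssh. The hostname ‘tower’ is only a placeholder for your server’s name or IP address, and the flash drive is normally mounted at /boot, so the ‘logs’ folder is /boot/logs.

    # log in to the server (replace 'tower' with your server's name or IP address)
    ssh root@tower

    # generate the diagnostics zip; it is written to the logs folder on the flash drive
    diagnostics
    ls /boot/logs/*diagnostics*.zip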
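
Relating to item 4 above: a sketch of running the repair from the command line, assuming the array has been started in Maintenance mode. The md device name for disk2 varies by Unraid release (older releases use /dev/md2, newer ones /dev/md2p1), so adjust it to match your system; the same check/repair can also be run from the disk’s page in the GUI.

    # dry run first: report problems without changing anything on disk
    xfs_repair -n /dev/md2

    # actual repair (verbose); recovered files that have lost their names end up in lost+found
    xfs_repair -v /dev/md2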
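
Relating to item 7 above: an illustrative check only, not how Unraid applies the setting internally; the Minimum Free Space value itself is set on the pool’s settings page in the GUI, and the file path below is purely hypothetical.

    # current free space on the cache pool, in bytes
    df --output=avail -B1 /mnt/cache | tail -1

    # size (in bytes) of the largest file you expect to write - hypothetical example path
    stat -c %s '/mnt/user/isos/some-large-image.iso'

    # set Minimum Free Space on the cache to a value larger than that second number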
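
Relating to item 19 above: a toy illustration of single parity at the raw level. The byte values are made up; real parity is calculated for every sector position across all the data drives, but the arithmetic is the same bitwise XOR shown here.

    # parity byte = XOR of the bytes at the same position on each data drive
    d1=0xA5; d2=0x3C; d3=0x0F
    printf 'initial parity = 0x%02X\n' $(( d1 ^ d2 ^ d3 ))

    # updating one data byte only needs the old byte, the new byte and the old parity,
    # which is why a normal write only touches the parity sectors for the file just written
    old_parity=$(( d1 ^ d2 ^ d3 ))
    new_d2=0x7E
    printf 'updated parity = 0x%02X\n' $(( old_parity ^ d2 ^ new_d2 ))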
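
Relating to item 20 above: a sketch of what the stock config/go file on the flash drive looks like. If you have added your own customisations keep them, but make sure only one line starts the management utility.

    #!/bin/bash
    # Start the Management Utility
    /usr/local/sbin/emhttp &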