
itimpi (Moderators) · Posts: 20,780 · Days Won: 57

Everything posted by itimpi

  1. You could, but if you do not know what is causing them, the culprit could just end up consuming the extra space you give it. In a sense you are just 'hiding' the underlying problem, but if you do not mind using the extra space then it may be the easiest thing to do.
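     If you want to track down what is actually consuming the space, something along these lines (run from the console; the path shown is just an example) can narrow it down:

        # Show the largest directories under a location, biggest first
        du -h --max-depth=2 /mnt/user/appdata | sort -hr | head -20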
  2. It all depends on how the developer built the container and what binaries are included in it. You could try asking in the support thread for a particular container.
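     As a rough sketch (the container and binary names here are placeholders), you can check from the console whether a particular binary is present inside a container:

        # Does the container ship a given tool? (assumes the image includes 'which')
        docker exec mycontainer which ffmpeg
        # Or look at what is installed
        docker exec mycontainer ls /usr/bin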
  3. The normal recommendation is at least the size of the largest file you want to be cached. You may want to add a safety margin on top of that. The key point is to stop the pool from getting completely full as that tends to cause issues.
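     You can keep an eye on how full a pool is getting from the console (the pool name 'cache' is just an example):

        df -h /mnt/cache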
  4. Using the Container Size button on the Docker tab might give you a clue. Failing that it is just a case of examining your containers one by one to work out which ones might be creating data dynamically and checking where they are writing that data, on the basis that anything not mapped to an external location is going to be written internally to the docker.img file.
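     From the console, 'docker ps -s' gives similar information: the SIZE column shows each container's writable layer, i.e. data being written inside docker.img rather than to a mapped location:

        docker ps -s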
  5. Chances are very high that you have a container writing internally to the image when it should be writing to a location mapped to the Unraid host. As an example this frequently happens if the Plex transcode temporary location is not mapped externally, but it could be something else.
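     As an illustrative sketch only (the exact container-side path depends on the template), the fix is to map the transcode directory to a host location, e.g.:

        # host path : container path - both are examples, not definitive values
        -v /mnt/cache/appdata/plex/transcode:/transcode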
  6. You can switch Exclusive mode for a share off and on, although it may (I am not sure) take a reboot to activate the change.
  7. Not directly relevant to your issue, but if you install the Parity Check Tuning plugin then even if you have it set to be disabled you will still get more detail in the Parity History, and the timings there should be accurate as the plugin uses a different mechanism from core Unraid to track what they should be.
  8. I do not think that Unraid will let you add a new parity and a new array disk in one step so you would need to do them one after the other (in any order).
  9. If the Extended SMART test fails then the drive should be replaced.
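     From the console the test can be run and checked with smartctl (replace sdX with the device in question):

        smartctl -t long /dev/sdX   # start the extended self-test
        smartctl -a /dev/sdX        # review the result once it completes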
  10. That is normal. It indicates how much of the ZFS ARC is being used.
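     If you want to see the actual figures, the ARC statistics can be read from the console (assuming the ZFS module is loaded):

        # Current ARC size and its configured maximum, in bytes
        awk '$1=="size" || $1=="c_max"' /proc/spl/kstat/zfs/arcstats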
  11. This will never do any harm. Note that passing memtest is not definitive proof that you have no memory issues, whereas failing it is. At that level Unraid is just Linux (Slackware), so it should be no more sensitive than other Linux systems.
  12. Not that I know of. Using an Exclusive share is more efficient, so it would probably reduce iowait as well.
  13. If that is the only place they are located (i.e. nothing set for Secondary storage) then the answer is yes. Note that to get Exclusive mode you first need to make sure it is enabled under Settings → Global Share Settings, and then also enable it for the relevant shares.
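     As a quick check (the share name is a placeholder), a share running in Exclusive mode shows up as a symlink pointing straight at the pool:

        readlink /mnt/user/yourshare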
  14. I do not see much point in this. Just set the spindown delay to a short value at the Unraid level.
  15. The idea is that by bypassing the Unraid FUSE layer that supports User Shares you get better performance. As was mentioned, if you are on a 6.12.x release then you can get the same performance by setting the share to run in Exclusive mode without having to change paths.
  16. As long as you have the Dynamix File Manager plugin installed (recommended), there is a Permissions button when browsing drives/shares.
  17. /dev/sda1 is your flash drive. It is not clear why it is having problems - it could be the drive itself, the USB port it is plugged into, or something else less obvious. Steps you might want to take that might help are:
      • Plug it into a PC to run a check on it, and let Windows fix any errors found.
      • Download the zip file for the Unraid release you are using and extract all the bz* type files, overwriting those of the same name in the root of the flash drive. This can help if there are files on marginal sectors that sometimes fail to read.
      • Consider having another USB flash drive ready in case your current one is about to fail.
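     A quick way to test whether the boot files read back cleanly (read errors here point at marginal sectors):

        cat /boot/bz* > /dev/null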
  18. Click on the link which then tells you how to do this.
  19. This is all configurable. You can individually configure for each share which pool (if any) is used for caching purposes. You can also configure where docker containers and VMs should be placed. You can change things around later if you change your mind about where things should go, although depending on the change made you may then have to manually move some data to match the new setup.
  20. You cannot do that using ZFS. You would have to replace both drives at the same time which means any existing data first has to be copied elsewhere. This is one reason for using BTRFS in pools - it is much more flexible about adding drives (particularly of mixed sizes).
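     For example (device and pool names are placeholders, and Unraid normally handles this via the GUI), BTRFS lets you grow a pool in place:

        btrfs device add /dev/sdX1 /mnt/poolname
        btrfs balance start /mnt/poolname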
  21. Did you try clearing the cache in Firefox? Not unusual for this to be required after an upgrade to get noVNC working.
  22. You probably want to let it run a bit further. The reallocated sectors count not being 0 is not a good sign, but as long as it stays stable the drive could be OK. Also worrying is the Pending Sectors value not being 0, although with any luck it will go back to 0 when the sector is next written.
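     Those two attributes can be watched from the console (the device name is a placeholder):

        smartctl -A /dev/sdX | grep -E 'Reallocated_Sector|Current_Pending'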
  23. This is an internal check in the plugin for a state I did not think would occur. Things should still work OK. Were you doing anything unusual in terms of array operations leading up to this? The diagnostics might give me a clue. If you know how to recreate it then I would appreciate it if it were possible to get a log with the plugin’s Testing logging mode active so I can see what triggered it.
  24. You should be able to see on the Main tab which drive is ‘sdl’.
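     From the console you can also map the device letter back to a drive model/serial:

        ls -l /dev/disk/by-id/ | grep sdl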
  25. Looking at the diagnostics, the check was paused because mover was running. Mover and parity checks badly affect each other’s performance, so you do not normally want them running at the same time. Any idea why mover was running? If you have notifications enabled then you should have received one giving the reason why the check was paused. I also noticed the following, which probably needs looking at:

      Aug 1 21:30:46 Tower kernel: critical medium error, dev sdl, sector 1325225704 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 3
      Aug 1 21:30:46 Tower kernel: Buffer I/O error on dev sdl, logical block 165653213, async page read