Everything posted by JorgeB

  1. If it were a general problem, more people would be complaining. Did it error out on the first docker install?
  2. Do you mean something like this? It's included in the GUI, toggle is highlighted on the upper right.
  3. Honestly no idea, was just trying some of the mount options to see if any of them made any difference.
  4. It can vary from system to system and how it's used; in some cases it might not be noticeable, or it might even perform better. No need.
  5. You should recreate it instead: https://forums.unraid.net/topic/57181-docker-faq/?do=findComment&comment=564309
  6. Only been 3 hours, but I can see from my previous stats posts that before doing this the average writes to cache over the last 36 days were 2.78TB per day. Over the last 26 hours it was 3.6TB, which is a little higher than average, but that's 139GB/h. Since the change, and for the last 3 hours, it's writing on average 28GB/h. So while not ideal, it's a fivefold decrease, not bad: if the SSD was going to last 5 months before, it should now last about 2 years. I'll take it.
  7. First suggestion is to please post the diagnostics (tools -> diagnostics) after it crashes.
  8. What part do you need help with? Backup/restore is just copying the data to the array or another device and then restoring it; to re-format, change the filesystem to a different one, format, change it back and format again. There are also some btrfs recovery options here if needed, i.e., if the filesystem goes unmountable (a rough backup/restore command sketch follows after this list).
  9. The cache filesystem is corrupt; best bet is to back up, re-format and restore any cache data.
  10. I've been playing with the various btrfs mount options and possibly found one that appears to make a big difference, at least for now. While it doesn't look like a complete fix for me, it decreases writes about 5 to 10 times, and it appears to work both for the docker image issue on my test server and, more encouragingly, also for the VM problem on my main server. It's done by remounting the cache with the nospace_cache option; from my understanding this is perfectly safe (though there could be a performance penalty) and it will go back to the default (using space cache) at the next array re-start. If anyone else wants to try it, just type this:
      mount -o remount -o nospace_cache /mnt/cache
      I'll let it run for 24 hours and check the device stats tomorrow; on average my server does around 2TB of writes per day (a small script to sample the write rate follows after this list). Like I mentioned, it's not a complete fix, I'm still seeing constant writes to cache, but where before it was hovering around 40-60MB/s it's now around 3-10MB/s, so I'll take it for now.
  11. Also, why not do it? There's a reason we ask for it: if you get bad results with a single-stream iperf test you'll likely also get bad results with a single SMB transfer (a single-stream iperf3 example follows after this list). This isn't useful at all since it's not a single stream. Another thing you can test is a user share vs. a disk share; user shares always have some additional overhead, and some users are much more affected by it than others, so try transferring to/from a disk share and compare.
  12. Unraid works with any hard drive, as long as it's supported by your hardware and is using the standard 512 or 4k sector sizes.
  13. See here: https://forums.unraid.net/topic/93432-parity-disk-read-errors/?do=findComment&comment=864078
  14. Please use the dedicated plugin support thread:
  15. You could try disabling all dockers and let it run for a few days, if all OK then start enabling one by one.
  16. If that were true everyone would have that issue, and most users, including myself, can read/write at normal speeds using SMB. There still might be some setting/configuration that doesn't work correctly for every server, though.
  17. Some users are probably still using the SASLP/SAS2LP without issues, but there are so many others with problems that we don't recommend using them. Still, they might work well for some, at least for some time.
  18. There's filesystem corruption on the cache; best bet is to back up, re-format and restore the cache data.
  19. Maybe not; you can always try and see. It will depend on whether the RAID controller writes something to the MBR and/or uses a non-standard partition. If it doesn't work you can always revert back to RAID mode and the disks will mount again (just never format them while in AHCI mode).
  20. It doesn't make any difference for me, and I would guess some affected users are using the default system share, which defaults to NOCOW.
  21. The only issue with SSDs on the array is the lack of trim, but if you use reasonable-quality SSDs it should be fine. I'd also recommend using a faster/higher-endurance SSD for parity, like an NVMe device, but if I understood correctly you won't be using parity. I've been using a small SSD array for a few months and it's still performing great, basically the same as when new, and I've written every SSD about 4 times over (parity 20 times over).
  22. Run a single stream iperf test to check network bandwidth, but it's normal that transferring to RAM is a little faster due to lower overhead/latency.
  23. Disks look fine, and since the emulated disks are unmountable the best option here is IMHO doing a new config and re-syncing parity. It's still a good idea to make sure the old disks mount correctly before doing the new config, so:
      - stop the array
      - unassign disks 3 and 4
      - start the array
      - use UD to mount the old disks in read-only mode
      - if they mount correctly and the data looks correct, do a new config and re-sync parity
      If you need help with the new config please ask.
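
For the cache backup/re-format/restore mentioned in item 8, here's a minimal command sketch, assuming the docker and VM services are already stopped, the cache is mounted at /mnt/cache, disk1 has enough free space, and the folder name cache_backup is just a placeholder:

    # Copy everything from the cache pool to a folder on disk1 of the array
    # (stop the docker and VM services first so nothing is writing to cache)
    rsync -avh --progress /mnt/cache/ /mnt/disk1/cache_backup/

    # Re-format the cache from the GUI (change the filesystem, format,
    # change it back, format again), then copy the data back:
    rsync -avh --progress /mnt/disk1/cache_backup/ /mnt/cache/

Check that the first copy finished without errors before formatting anything.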
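
To see how much the nospace_cache remount from item 10 actually helps, a small script like the sketch below can sample the average write rate to the cache device; the device name sdX and the 60-second interval are assumptions you'd adjust, and field 10 of /proc/diskstats is the cumulative count of 512-byte sectors written:

    #!/bin/bash
    # Sample the average write rate to the cache SSD over a fixed interval.
    DEV=sdX        # replace with your cache device, e.g. sdb or nvme0n1
    INTERVAL=60    # seconds between the two samples

    sectors_written() {
      # field 10 of /proc/diskstats = sectors written (512 bytes each)
      awk -v d="$DEV" '$3 == d {print $10}' /proc/diskstats
    }

    before=$(sectors_written)
    sleep "$INTERVAL"
    after=$(sectors_written)

    awk -v b="$before" -v a="$after" -v t="$INTERVAL" \
      'BEGIN {printf "Average write rate: %.1f MB/s\n", (a - b) * 512 / 1048576 / t}'

Running it before and after the remount gives a rough comparison without waiting for the daily totals.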
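
For the single-stream test mentioned in items 11 and 22, this is one way to run it with iperf3 (assuming iperf3 is available on both ends; TOWER_IP is a placeholder for the server's address):

    # On the Unraid server, start iperf3 in server mode:
    iperf3 -s

    # From the client, run a 30-second single-stream test (-P 1 is the
    # default, shown here just to be explicit):
    iperf3 -c TOWER_IP -P 1 -t 30

    # Add -R to reverse the direction and test server-to-client throughput:
    iperf3 -c TOWER_IP -P 1 -t 30 -R

If the single-stream number is well below line rate, SMB transfers will usually show the same limit.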