sirkuz

Everything posted by sirkuz

  1. Nevermind: I had it installed on a custom network, and those ignore custom port settings (see the quick sketch after this list). Original question: I don't seem to be able to change the webUI port from 3000. Whatever I set it to, it just goes back to 3000. I tried a few times, including pulling it down fresh. Anyone else have this issue? I already have Grafana on 3000.
  2. Dug a bit deeper on it, and it seems to be Unraid networking related: the VLAN bridge I have on one interface isn't routing properly. Please disregard (sorry, I deleted the question before I saw a response).
  3. Yes, both live on the cache pool, which is a mirrored pair of 1TB NVMe drives.
  4. I am doing some further testing while running a parity rebuild in safe mode... I have also discovered that simply enabling the docker service results in a performance hit, even with all containers stopped. In my case I estimate it at around 30MB/s. As soon as I stop the docker service I get those 30MB/s back. This could take a lot of time to track down if both plugins and the docker service are causing issues with parity speeds. 🤪
  5. Right, trying to track down which one might be the culprit next. Right now I'm trying to get my drive upgrades done as quickly as possible, then I'll have some time to start removing plugins and testing after each one.
  6. Did some more testing today on the machine with faster drives. I doubled my parity speed by enabling safe mode, and gained another 50% on top of that when booting 6.7.2: 6.10.0-rc2 safe mode ~100MB/s; 6.7.2 safe mode ~150MB/s, sometimes as high as 170-180MB/s; back on 6.10.0-rc2 non-safe mode with containers/VMs disabled, 45-50MB/s; rebooting into 6.10.0-rc2 safe mode, ~100MB/s again.
  7. It is unfortunate. At this point it might be worth my time to actually downgrade the fast server I am upgrading, just so I can complete the drive replacements in a more timely manner. Three days for each is a killer when doing 10 drives. I'll send in some feedback directly from my systems as well.
  8. Thanks, got that system downgraded. Very similar hardware, other than the drives themselves: same-generation CPU and RAM, and the same exact controller model; the drives are slower. Downgrading from 6.10.0-rc2 to 6.7.2, my parity check read speed almost doubled. Seeing as high as 110MB/s on that array, compared to the previous 50-60MB/s range.
  9. Sorry, but I am coming up empty trying to find a download link for 6.7.2. Do you happen to have a link to older versions like that? I have a secondary server with the same controller that I might try it on first to see if it makes a difference. That system already has slower drives, but similar hardware and a similar number of drives. Thanks in advance!
  10. I'll give it a go after the current sync is done in a couple of days to see what the speeds look like. Appreciate the help!
  11. Not sure I can downgrade without breaking a bunch of stuff, specifically because I'm running my containers with ipvlan rather than macvlan. I might be better off rolling a new trial USB and testing that way so I don't risk ruining my current configuration.
  12. With it being so slow, I am currently living dangerously and doing a two-drive upgrade, since I have dual parity. Currently at ~55MB/s each, around the 46% mark.
  13. If I recall correctly, it'll start out around 60MB/s or slightly above before settling in around 50-55MB/s.
  14. This is an LSI SAS 9305-24i running at PCIe 3.0 x8 with 24 SATA3 drives connected, ranging from 8TB and up. It should have more than enough bandwidth at that link speed (rough math after this list). It was definitely faster previously. The rest of the system is a Supermicro X9DRi-LN4+ with dual E5-2650 v2 CPUs and 384GB RAM.
  15. Is anything further being looked into to address the issue with large arrays? I am experiencing painful parity rebuild speeds myself while upgrading the disks in my array from 8TB to 14TB. Each disk I replace costs me 72 hours, so it's going to take a good month or two to get through them all. I have tried everything I can think of, and there are no issues in my syslog. I adjusted all the tunables to defaults and every which way with no difference. Disabling the docker engine or VMs does not matter (they are all on the cache pool anyway). Currently I am on 6.10.0-rc2 and hoping maybe something else is up the sleeve to alleviate this in 6.10 final.
  16. I am on 6.10.0-rc2, and while running parity rebuilds I'm noticing that I still get these errors when leaving the UI open in the latest Edge browser. Waiting on 6.10 final to hopefully resolve this one.
  17. ipvlan seems like an upgrade from macvlan anyway (see the sketch after this list), so I didn't even bother testing my config on macvlan with 6.10.0-rc1. I went straight to ipvlan as soon as the update was complete, and I'm now at 45 hours with no issue. I generally couldn't make it half that long on 6.9.x with my configuration before it locked up.
  18. 17 hours on 6.10.0-rc1 with the switch over to ipvlan and no panics yet... knock on wood. If I make it to 48 hours I will be convinced.
  19. Hello and thanks! The USB Creator tool doesn't yet list 6.10.0-rc1 under the "Next" selection area. Are you recommending I use it to reinstall 6.8.3 onto the flash drive, copy my backup over to the drive again, and then attempt the upgrade to 6.10.0-rc1 via the GUI again?
  20. I get this after running "make_bootable.bat" as admin on a Windows machine and then trying to boot Unraid 6.10.0-rc1.
  21. Tried updating from 6.8.3 to 6.10.0-rc1 and simply got a failed-to-boot message. I was interested in testing out ipvlan to see if it solves my issues with the macvlan driver in the 6.9.x releases. Is there anything I can grab off the flash drive to help diagnose the failure to boot after the update?
  22. Thank you @HyperV!! After a couple of days of frustration thinking "it's always DNS," I found your fix here with the good ol' search button, and it worked like a champ. I cannot upgrade to 6.9.2 because of the macvlan crashes in the docker implementation.
  23. I waited until 6.9.2 to make the jump from 6.8.3, and unfortunately I ran into this problem as well. 6.8.3 was stable for months for me, so I'm trying to downgrade now to keep the wife and kids happy. I'll follow along to see when this one gets sorted out. Cheers to those putting in the effort to get this fixed! Thanks!
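
On the custom-network port question in post 1, here is a minimal sketch of the behavior using the docker SDK for Python. The network name, parent interface, subnet, and the Grafana image are illustrative assumptions rather than the poster's actual setup (Unraid normally creates these networks from its own settings pages); the point is that on a user-defined macvlan or ipvlan network the container gets its own LAN IP, so a host-side port remap is simply ignored.

```python
# Sketch only: a "custom" (macvlan) docker network, roughly what Unraid's br0 looks like.
# pip install docker
import docker

client = docker.from_env()

# Assumed parent interface and subnet; adjust for the actual LAN.
net = client.networks.create(
    "br0_test",
    driver="macvlan",
    options={"parent": "eth0"},
    ipam=docker.types.IPAMConfig(
        pool_configs=[docker.types.IPAMPool(subnet="192.168.1.0/24",
                                            gateway="192.168.1.1")]
    ),
)

# The ports mapping below has no effect on a macvlan/ipvlan network: there is no
# host-side NAT, so the webUI stays on the app's native port (3000 for Grafana)
# at the container's own IP.
container = client.containers.run(
    "grafana/grafana",
    detach=True,
    name="grafana-test",
    network="br0_test",
    ports={"3000/tcp": 3100},
)
container.reload()
print(container.attrs["NetworkSettings"]["Networks"]["br0_test"]["IPAddress"])
```

That is why the port setting appeared to "snap back" to 3000: it was never being applied in the first place, and the existing Grafana instance on the host's port 3000 never actually conflicted with it.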
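For the bandwidth claim in post 14, a rough back-of-the-envelope check; the per-lane and per-drive throughput figures below are approximations, not measurements from that system.

```python
# Rough arithmetic: does a PCIe 3.0 x8 HBA have headroom for 24 spinning SATA drives?
PCIE3_LANE_GBPS = 0.985      # ~985 MB/s usable per PCIe 3.0 lane (8 GT/s, 128b/130b)
LANES = 8
link_gbs = PCIE3_LANE_GBPS * LANES        # ~7.9 GB/s for the x8 link

DRIVES = 24
PER_DRIVE_GBPS = 0.25        # ~250 MB/s peak sequential for a large 7200 rpm HDD
demand_gbs = DRIVES * PER_DRIVE_GBPS      # ~6.0 GB/s worst case, all drives streaming

print(f"link ~{link_gbs:.1f} GB/s vs worst-case drive demand ~{demand_gbs:.1f} GB/s")
```

Even in the worst case the link has headroom, which supports the point that the slowdown is unlikely to be the controller's PCIe bandwidth.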
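On the macvlan-to-ipvlan switch in posts 17 and 18, this is roughly what the equivalent custom network looks like when created with docker's ipvlan driver, again via the docker SDK for Python; the interface name and subnet are assumptions, and Unraid sets this up for you when you choose ipvlan in its docker settings.

```python
# Sketch: an ipvlan L2 network. Containers still get their own LAN IPs, as with
# macvlan, but they share the host's MAC address instead of minting new ones.
import docker

client = docker.from_env()

ipvlan_net = client.networks.create(
    "br0_ipvlan",
    driver="ipvlan",
    options={"parent": "eth0", "ipvlan_mode": "l2"},
    ipam=docker.types.IPAMConfig(
        pool_configs=[docker.types.IPAMPool(subnet="192.168.1.0/24",
                                            gateway="192.168.1.1")]
    ),
)
```

From the LAN's point of view the containers behave much the same, which is why the config change was painless, while sidestepping the macvlan-related call traces and panics described in the later posts.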