patchrules2000

Members
  • Posts: 11

Everything posted by patchrules2000

  1. Thanks for the quick reply, Jonathanm! I disabled the empty VM I created while testing to work out this problem, and both VMs are now showing up! I'll upgrade to RC7 once all users are off the server a bit later on, as a better fix than just leaving one VM not running.
  2. Hi all, I found a very odd error on my system. Hopefully it's a simple fix; any insight into what is causing it is greatly appreciated.

     Today I went to VNC into my virtual machine through the web UI and discovered that the VM has disappeared completely from the Dashboard and the VMs tab. I then tried to recreate its config using the same vdisk, only to discover that I can't because the file is still in use. I then tried to log in to it via VNC by ip:port from a VNC viewer (not through the Unraid web UI) and it popped right up! So somehow the VM is still running; I just can't see it in Unraid. Also, if I restart the server, it boots up and launches perfectly fine, and I can still VNC into the VM once it boots.

     I then tried to create a new VM with a new vdisk, and it was created without error, but it still didn't show up in the web UI after creation. So it seems the web UI is simply not picking up that the VMs exist, even though the Unraid backend knows they are there and fires them up on reboot. Both the original VM and the newly created VM show up with "virsh list --all" in the console (see the virsh sketch after this list).

     Thanks for your help in advance. Regards, Patrick
  3. Thanks for the quick reply, johnnie. I was about to gather some data to open this back up and noticed RC6 has been released, so I thought I might as well try it first. It turns out RC6 fixes something for me that previous RCs did not, and now everything seems to be running smoothly: pushing 200+ MB/s read + write, CPU encoding at 90% usage while delivering 3+ media streams, and not a single slowdown or high I/O wait issue to report. Suffice to say I'm very pleased, as long as this doesn't resurface as an issue between RC6 and the final release. Keep the magic Unraid sauce flowing, please!
  4. I, and other users, seem to still be having this same performance issue in the latest RCs. Why has this been closed off as fixed? Or is it tracked in another issue that I haven't found?
  5. Haha, happens to the best of us. The update worked a treat! Thanks for your help. I can now rest easy at night knowing I'm getting full throughput from my drives, all 4.85 GB/s of it! Also good to know I have some theoretical headroom of about 3 GB/s for future expansion through expanders, if I ever find a case to support that many drives. Thank you for supporting such a useful tool.
  6. Hi Jbartlett, thanks for working on the update. Unfortunately I updated to the latest container and it is still showing version 2.2 in the top-left corner and still giving the same error. I even tried completely deleting the Docker image and all local files and reinstalling, just in case something was cached wrong, and I still don't get version 2.3 or removal of the error. Does it take a while to propagate through Docker, or is there something I need to do to force it to 2.3 (such as a beta branch or something)?

     Also, just FYI, SAS controllers (with expanders) can be connected to up to 256 drives, although that would make for quite a large timeout value. My thoughts on a more robust future fix: would it be possible to change this value from a constant to a variable, e.g. 30 seconds * number of drives, to avoid compatibility issues with different configs (see the timeout sketch after this list)? Alternatively, if that isn't possible, a more reasonable general limit would be the time needed for 32 drives: Unraid's main array limit is 30 drives, and a standard config usually also includes 2 cache drives, so 30 + 2 = 32 drives maximum on a "normal" Unraid system. At your stated 15 seconds per drive that works out to about 8 minutes of runtime, so perhaps a 10-minute timeout would be appropriate for this scenario? (Unraid technically does support 54 drives, 30 array + 24 cache, but that is an extreme edge case and would most likely never occur, since the controller bandwidth bottleneck would be crazy if 30 array drives and 24 cache drives were all being hit at once on the same controller.)

     Thanks for your help so far, mate. Regards, Patrick
  7. Hi all, hopefully you will be able to help. I just upgraded my Unraid rig from two SAS controllers to a single 24-port controller. I am trying to complete a controller benchmark, but it keeps failing about 2/3 of the way through with a timeout error. I suspect it is a software constraint set by DiskSpeed for the controller test, as all drives pass testing with good speed and no other errors when run individually. It fails around drive 16 of 24 on every run. Is there a config value I could change to roughly double this timeout limit (if that is the actual problem) and allow a full run through all 24 drives? Thanks for your help :)
  8. I had NCQ disabled on both the newest version and 6.6.7 when I saw the huge performance improvement. I have since enabled it on 6.6.7 and haven't noticed any tangible difference, besides everything just working again after the downgrade in version. Queue depth defaulted to 1 after enabling it in the menu; I changed it to 31 on all drives (see the queue-depth sketch after this list). It might be noticeable in a benchmark, but at the moment there is no obvious change (although the server is currently not under a very large load). I don't really feel like upgrading again to a broken version just to test NCQ on 6.7.*, but if it is a turning point in getting the issue fixed I can give it a try. Hope this info has been helpful.
  9. Just wanted to add my 2c: same exact issue, most obvious when NZBGet is processing/extracting a download, resulting in a bunch of reads and writes at once. It completely freezes up any other disk I/O, with upwards of 50% CPU time in iowait, even if I'm just trying to do a simple file read from a disk not in use at all by NZBGet or any other process (see the iowait sketch after this list). Reverting the Unraid version eliminated the issue, so something between 6.6.7 and the current release is definitely the cause. Hopefully we get a fix soon.
  10. Hmm, that could be related; it is copying from a ReiserFS drive to a new XFS drive, from drive 1 -> 9. I will try XFS -> XFS when this transfer finishes in the morning. Drive list: https://www.dropbox.com/s/flq9haiz6v8vusd/drivelist.jpg?dl=0
  11. So I have a bit of a weird issue that hopefully someone can shine some light on. First, some background on the performance I normally see from this system: all my drives, when accessed individually, can sustain 150+ MB/s with no issues; parity sync averages about 140 MB/s; preclearing drives I get up to 200 MB/s, which then steadily declines toward the sectors nearer the middle of the spindle, which tells me the I/O path is fine and the drives' physical maximum speed is the limit; and finally, on reads and writes over the network I can completely saturate my gigabit network at 100 MB/s.

      After all that, you can probably see why I find it strange that when doing a file transfer between two drives internally on the server, using the mc command through the SSH console, I start seeing hiccups and spiky transfer speeds: it transfers at what seems like a reasonable full speed of 100-150 MB/s for about 10 seconds, then drops off to almost no speed at all for another 10, then repeats the cycle over and over again (see the copy-test sketch after this list). I'm not even that concerned with getting a really high transfer speed; what worries me is the intermittent performance, which to me makes no sense. I have a few screenshots of transfer speeds reported by Unraid below and can provide whatever syslog etc. might help shed light on the issue.

      Internal transfer speeds: https://www.dropbox.com/s/7dokkhewc1gc3fn/InternalTransfer.jpg?dl=0
      Preclear writing for reference: https://www.dropbox.com/s/7mvj6330ot3ouge/preclear.jpg?dl=0
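
A minimal console sketch for post 2, confirming from the command line that a VM the web UI no longer shows is still defined and running in libvirt, and backing up its config. "Win10" is a hypothetical VM name and /boot/Win10.xml an arbitrary backup path; substitute whatever "virsh list --all" actually reports.

    virsh list --all                       # every VM libvirt knows about, plus its state
    virsh dominfo Win10                    # basic state/CPU/RAM details for one VM
    virsh dumpxml Win10 > /boot/Win10.xml  # save the XML definition before touching anything
    # If the definition ever vanishes entirely, the backup can be re-registered with:
    # virsh define /boot/Win10.xml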
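
A rough sketch of the per-drive timeout proposed in posts 6 and 7, written as shell arithmetic. The variable names are invented for illustration and are not real DiskSpeed settings, and the drive count is only a crude glob over /dev.

    SECONDS_PER_DRIVE=30                                      # allowance suggested in post 6
    NUM_DRIVES=$(ls /dev/sd? /dev/sd?? 2>/dev/null | wc -l)   # rough count of attached SATA/SAS drives
    TIMEOUT=$(( SECONDS_PER_DRIVE * NUM_DRIVES ))             # e.g. 24 drives -> 720 s
    echo "controller benchmark timeout: ${TIMEOUT}s"
    # The fixed alternative from post 6: 32 drives * 15 s/drive = 480 s, roughly 8 minutes.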
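
For post 8, a sketch of checking and raising the queue depth from the console through the standard sysfs interface; the Disk Settings menu mentioned in the post presumably writes the same values. "sdb" is a placeholder device name.

    cat /sys/block/sdb/device/queue_depth        # a depth of 1 effectively disables NCQ
    echo 31 > /sys/block/sdb/device/queue_depth  # the value used in the post
    # To apply it to every sd* device at once (adjust the glob to suit the system):
    for d in /sys/block/sd*/device/queue_depth; do echo 31 > "$d"; done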
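
For post 9, a sketch of watching iowait and per-disk latency while NZBGet is unpacking, assuming the usual top and iostat (sysstat) tools are present on the box.

    top -bn1 | head -3   # the "wa" value in the %Cpu(s) line is CPU time spent waiting on I/O
    iostat -x 2 5        # extended per-device stats (utilisation, wait times), five 2-second samples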
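
For posts 10 and 11, a sketch of repeating the disk-to-disk copy with different tools, directly between the /mnt/diskN mounts rather than the /mnt/user fuse share, to rule out mc itself. The paths and file names are placeholders.

    rsync -a --progress /mnt/disk1/somefolder/ /mnt/disk9/somefolder/
    # Or copy one large file with dd to watch the raw sequential rate:
    dd if=/mnt/disk1/somefolder/bigfile.mkv of=/mnt/disk9/bigfile.mkv bs=1M status=progress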