
patchrules2000


Posts posted by patchrules2000

  1. Hi all,

     

     Found a very odd error on my system. Hopefully it's a simple fix; any insight into what is causing it is greatly appreciated :)

     

     So today I went to VNC into my virtual machine through the WebUI and discovered that my VM has disappeared from the Dashboard and the VMs tab completely.

     I then tried to recreate the config for it using the same vdisk, only to discover that I can't because the file is still in use.

     Then I tried to log in to it via VNC by IP:port from a VNC viewer (not through the Unraid web interface) and it pops right up!

     So somehow the VM is still running, but I just can't see it in Unraid.

     Also, if I restart the server, the VM boots up and launches perfectly fine, and I can still VNC into it once it's up.

     I also tried to create a new VM with a new vdisk, and it was created without error, but it still didn't show up in the WebUI after creation.

     So it seems the WebUI is just not picking up that the VMs exist, even though the Unraid backend knows they're there and fires them up on reboot.

     Both the original VM and the newly created VM show up with "virsh list --all" in the console.
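
     For reference, these are roughly the console commands I used to check that the VMs still exist at the libvirt level (the domain name "Windows10" and the backup path are just examples, not necessarily my exact setup):

     # List every libvirt domain, running or stopped - both VMs appear here
     virsh list --all
     # Show the state and resources of one domain (example name)
     virsh dominfo "Windows10"
     # Save a copy of its definition in case the WebUI config ever needs rebuilding
     virsh dumpxml "Windows10" > /boot/Windows10-backup.xml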

     

    Thanks for your help in advance :)

     

    Regards,

    Patrick

    VM 2.jpg

    VM 1.jpg

  2. 5 hours ago, jbartlett said:

    I pushed an update after pulling up BeyondCompare to do a code sync but didn't actually sync the code. That'll teach me to try to work on bugs after being up for nearly 24 hours after a whirlwind vacation of New York City & Long Island! Try again, I show 2.3 on my main rig.

     

    Instead of just increasing the timeout to accommodate your rig, I just bumped it up to 9999 seconds for the controller benchmark (2.78 hours).

     Haha, happens to the best of us :)

     

     The update worked a treat! Thanks for your help.

     

     I can now rest easy at night knowing I'm getting full throughput from my drives, all 4.85 GB/s of it!

     

     It's also good to know I have some theoretical headroom of about 3 GB/s for future expansion through expanders, if I ever find a case that supports that many drives.

    Thank you for supporting such a useful tool.

     

    image.thumb.png.037ee50884eef27bb34c1cfcf4af9962.png

     

  3. 5 hours ago, jbartlett said:

     

    I didn't know there was a 24 port controller. I have the timeout set to five minutes to try to nip any rogue processes in the bud but at 24 drives testing for 15 seconds each, it would take at least six minutes to do a complete pass.

     

    I've updated the default timeout to ten minutes. I just pushed version 2.3 - please update the Docker and try again.

     Hi jbartlett,

     

    Thanks for working on the update.

     Unfortunately I updated to the latest container, but it's still showing version 2.2 in the top-left corner and still giving the same error.

     I even tried completely deleting the Docker image and all local files and reinstalling, just in case it was cached wrong somehow, and I'm still not getting version 2.3 or any change in the error.
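
     For reference, this is roughly what I ran from the console to force a fresh copy (the image name here is from memory, so treat it as an example rather than the exact repository):

     # Pull the latest DiskSpeed image again and confirm what is actually on disk
     docker pull jbartlett777/diskspeed:latest
     docker images | grep -i diskspeed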

     

     Does it take a while for the Docker image to propagate, or is there something I need to do to force it to 2.3 (such as a beta branch or something)?

     

     Also, just FYI, SAS controllers (with expanders) can be connected to up to 256 drives!

     However, that would be quite a large timeout value ;)

     

     Just my thoughts on the issue as well, towards a more robust future fix.

     Would it be possible to change this value from a constant to a variable, e.g. "30 seconds * number of drives", to fix any compatibility issues with different configs?
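
     Something like this rough sketch is the idea (not your actual code, and the 30-second per-drive allowance is just an assumption):

     # Scale the controller-benchmark timeout with the number of drives instead of using a fixed constant
     PER_DRIVE_SECONDS=30                               # assumed allowance per drive
     NUM_DRIVES=$(lsblk -d -n -o NAME | grep -c '^sd')  # crude count of sd* disks
     TIMEOUT=$(( PER_DRIVE_SECONDS * NUM_DRIVES ))
     echo "Controller benchmark timeout: ${TIMEOUT}s for ${NUM_DRIVES} drives"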

     

     Alternatively, if that isn't possible, a more reasonable general limit would be the equivalent time needed for 32 drives, as Unraid's main array limit is 30 drives and a standard config usually also includes 2 cache drives, so 30 + 2 = 32 drives maximum on a "normal" Unraid system.

     That would equate to a runtime of about 8 minutes at your stated 15 seconds per drive (32 × 15 s = 480 s), so perhaps a 10-minute timeout would be appropriate for this scenario?

     

     (Unraid technically does support 54 drives: 30 array plus 24 cache. But that would be an extreme edge case and would most likely never occur, as the controller bandwidth bottleneck would be something crazy if 30 array and 24 cache drives were all being hit at once and all were on the same controller.)

     

    Thanks for your help so far mate.

     

    Regards,

    Patrick

  4. Hi all, hopefully you will be able to help.

     

     I just upgraded my Unraid rig from two SAS controllers to a single 24-port controller.

     I am trying to complete a controller benchmark, but it keeps failing about two-thirds of the way through with a timeout error.

     I suspect it is a software constraint set by DiskSpeed for the controller test, as all drives pass testing with good speed and no other errors when run individually. It fails around drive 16 of 24 on every run-through.
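
     (For what it's worth, this is the kind of quick per-drive read check I also ran from the console, independent of DiskSpeed's own benchmark; the device name is just an example:)

     # Rough buffered-read speed for a single drive - repeated for each of the 24 disks
     hdparm -t /dev/sdb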

     

     Is there a config value I could change to approximately double this timeout limit (if that is the actual problem) and enable a full run-through of all 24 drives?

     

    Thanks for your help :)

     

    image.thumb.png.cee9640ead0de7ddb5024a59acdd8cb1.png
