Execut1ve

Posts posted by Execut1ve

  1. Hello all, recently my Plex server has not been working. I can't access the webui and the container's status in Docker shows "unhealthy."

     

    To my knowledge I haven't changed anything in the container's settings or made any recent changes to my server. I have been tinkering with router / DNS stuff, but it seems like that shouldn't be related.

     

    Can anyone point me in the right direction? Screenshot and diagnostics attached.

     

    Thanks!

    plex unhealthy.jpg

    newunraid-diagnostics-20220522-1911.zip
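
    While waiting, here's roughly how I've been peeking at the failing health check (a sketch; it assumes the container is literally named plex, which may not match your template):

    # Sketch: dump the health-check history Docker keeps for the container.
    # Assumes the container is named "plex"; adjust to match your template.
    import json
    import subprocess

    result = subprocess.run(
        ["docker", "inspect", "--format", "{{json .State.Health}}", "plex"],
        capture_output=True, text=True, check=True,
    )
    health = json.loads(result.stdout)
    print("Status:", health["Status"])       # e.g. "unhealthy"
    for entry in health["Log"]:              # output of the last few health checks
        print(entry["ExitCode"], entry["Output"].strip())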

  2. It seems it was too good to be true... the pausing behavior has returned.

     

    Based on the syslog, it does seem to be related to an issue with one of the graphics cards:

    Oct 15 10:12:11 RemoteUnraid kernel: pcieport 0000:00:07.0: AER: Multiple Uncorrected (Fatal) error received: 0000:00:00.0
    Oct 15 10:12:11 RemoteUnraid kernel: vfio-pci 0000:06:00.0: AER: PCIe Bus Error: severity=Uncorrected (Fatal), type=Inaccessible, (Unregistered Agent ID)
    Oct 15 10:12:12 RemoteUnraid kernel: pcieport 0000:00:07.0: AER: Root Port link has been reset
    Oct 15 10:12:12 RemoteUnraid kernel: pcieport 0000:00:07.0: AER: device recovery successful

     

    I'm going to try reseating all the cards and risers and see if that helps any.
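
    In the meantime, here's roughly how I'm pulling the AER messages out of the syslog to keep an eye on it (a sketch; it assumes the log lives at /var/log/syslog like on my box):

    # Sketch: collect AER (PCIe error) messages from the syslog, grouped by the
    # PCI address that reported them. The path assumes /var/log/syslog.
    import re
    from collections import defaultdict

    AER_LINE = re.compile(r"(\d{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]): AER: (.+)")

    errors = defaultdict(list)
    with open("/var/log/syslog") as log:
        for line in log:
            match = AER_LINE.search(line)
            if match:
                errors[match.group(1)].append(match.group(2).strip())

    for device, messages in errors.items():
        print(device, f"- {len(messages)} AER messages")
        for message in messages[-3:]:    # most recent few per device
            print("   ", message)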

  3. I actually got the VM to work without pausing by adding a USB keyboard and mouse and passing them through to the VM.

     

    I have no idea why that would work, but the VM did originally have those items. I had removed them when I physically relocated my server, but now that I've re-added them the VM is working again.

     

    I typically access the VM via VNC. I usually don't have any issues with it, except for the occasional disappearing mouse cursor.

  4. After some informal experimentation, I'm not seeing much difference (if any) in my total PPD between allocating the container 4 cores (with nothing on the HTs) vs. allocating 2 cores with their 2 HTs.

     

    For reference, I'm folding on 4 GPUs: 3 of the Zotac 1060 mining variants and 1 GTX 960. They are connected to the mainboard via powered PCIe riser cables. Two of them are in x8 slots and two are in x4 slots. All the PCIe slots are Gen 2. The computer is a PowerEdge R710 server with dual Xeon X5690 processors. I'm averaging 800k-1M total PPD, with each card sitting in the 200k-250k range. I don't notice any substantial difference between the cards in the x4 slots vs the x8 slots.

     

    Can anyone else with a hyperthreaded CPU offer any observations?
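
    For anyone who wants to repeat the comparison, here's roughly how I worked out which logical CPUs are hyperthread siblings before pinning (a sketch against Linux's sysfs topology files; the numbering on your board will differ, and the printed --cpuset-cpus strings are just Docker's flag for pinning a container to specific cores):

    # Sketch: work out hyperthread sibling pairs from sysfs, then print the two
    # pinning options I compared. Core numbering is specific to my R710.
    from pathlib import Path

    def sibling_groups():
        seen, groups = set(), []
        cpu_dirs = sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"),
                          key=lambda p: int(p.name[3:]))
        for cpu_dir in cpu_dirs:
            text = (cpu_dir / "topology/thread_siblings_list").read_text().strip()
            cpus = []
            for part in text.split(","):       # file looks like "0,12" or "0-1"
                if "-" in part:
                    low, high = part.split("-")
                    cpus.extend(range(int(low), int(high) + 1))
                else:
                    cpus.append(int(part))
            key = tuple(sorted(cpus))
            if key not in seen:
                seen.add(key)
                groups.append(key)
        return groups

    groups = sibling_groups()
    four_cores_no_ht = [g[0] for g in groups[:4]]            # 4 physical cores, HTs idle
    two_cores_plus_ht = [c for g in groups[:2] for c in g]   # 2 cores + their 2 HTs

    print("4 cores, no HT:  --cpuset-cpus=" + ",".join(map(str, four_cores_no_ht)))
    print("2 cores + HTs:   --cpuset-cpus=" + ",".join(map(str, two_cores_plus_ht)))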

  5. 12 hours ago, testdasi said:

    That's normal. The CPU thread is used to load data to and from the GPU and it's a substantial amount of data to load.

    That's why it's important to ensure you pin the right cores for the F@H docker to prevent lag to the important stuff.

    Hm, I wonder if I'd notice a hit to folding performance if I assigned the container 2 cores and 2 hyperthreads instead of 4 cores? Time for some experimentation!

  6. I've been using the docker container to fold with 4 GPUs / no CPU. Everything seems to be working well, but I've noticed that the container seems to use a CPU core for each GPU slot, and each CPU core it uses is pinned at 100% utilization. Is anyone else getting similar behavior?

     

    I realize the GPUs have to be fed data to fold, but it seems like that shouldn't take up 100% of a core. The CPUs are Xeon X5690s, so not exactly new, but not slouches either. Can anyone offer any thoughts? Am I misunderstanding something about how all this works?
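
    For reference, this is roughly how I've been watching the per-core numbers (a sketch that assumes the psutil package is installed on the host; the pinned core list is just an example from my setup):

    # Sketch: print utilization for the logical CPUs pinned to the F@H container,
    # to see the one-pegged-core-per-GPU-slot behavior. Assumes psutil is installed.
    import psutil

    PINNED = [0, 1, 2, 3]    # example only: the logical CPUs assigned to the container

    for _ in range(12):      # about a minute of 5-second samples
        usage = psutil.cpu_percent(interval=5, percpu=True)   # one value per logical CPU
        print({cpu: usage[cpu] for cpu in PINNED})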

  7. In case anyone is curious how this turned out, I did some research into the Intel 5520 chipset that's on the R710. The architecture seems to use 3 of what Intel calls QuickPath Interconnect (QPI) links to connect the 2 processors to each other and each processor to the I/O hub. So unlike some other architectures (such as the one in Gridrunner's tutorial), the PCIe slots seem to connect to the shared I/O hub, and through it to both processors, rather than to just one or the other.

     

    jK1Q9Cv.png
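
    One way to sanity-check that from the OS side is to ask the kernel which NUMA node a given PCI device reports (a sketch; the address is just the card from my earlier AER log, so substitute your own):

    # Sketch: read the NUMA node the kernel reports for a PCI device. On this
    # 5520/IOH layout I'd expect every slot to report the same node (or -1),
    # since they hang off the shared I/O hub rather than a single CPU.
    from pathlib import Path

    DEVICE = "0000:06:00.0"   # example address from my syslog; use your own

    node = Path(f"/sys/bus/pci/devices/{DEVICE}/numa_node").read_text().strip()
    print(f"{DEVICE} reports NUMA node {node}")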

  8. Hello all, I recently followed Gridrunner's excellent tutorial here on how to use the lstopo command to optimize PCI slot assignment, CPU core assignment, etc. for the best performance of VMs in a multi-CPU setup.

     

    I am using a PowerEdge R710, which is a dual-CPU board, with 2x Xeon X5690 installed. I was expecting to see output similar to the example in the video, though with 2 CPUs instead of 4, and that I'd just have to match up which PCI slot is connected to which CPU. Instead, this is what I got:

    zZ6ggnr.png

     

    There are a couple of things about this that don't make sense to me:

    - Why are the numbers of the CPU cores all wonky? Each CPU is 6 cores (12 threads), so I'd expect to see them numbered 0-5 and 6-11 or something like that.

    - Why do all the devices seem to be connected to the same CPU? My understanding was that each CPU has its own memory and some (but not all) of the PCIe lanes; is that not correct?

    - What do the numbers (e.g. 1.0, 2.0, 0.5) on some of the devices mean?
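
    For what it's worth, this is how I later double-checked the core numbering straight from sysfs (a sketch; nothing Unraid-specific, it just maps each logical CPU to its socket and core id):

    # Sketch: map each logical CPU to its physical package (socket) and core id,
    # to see why lstopo's numbering isn't a tidy 0-5 / 6-11 split per socket.
    from collections import defaultdict
    from pathlib import Path

    sockets = defaultdict(list)
    cpu_dirs = sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"),
                      key=lambda p: int(p.name[3:]))
    for cpu_dir in cpu_dirs:
        topo = cpu_dir / "topology"
        package = int((topo / "physical_package_id").read_text())
        core = int((topo / "core_id").read_text())
        sockets[package].append(f"{cpu_dir.name} (core {core})")

    for package, cpus in sorted(sockets.items()):
        print(f"Socket {package}:", ", ".join(cpus))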

  9. I like a lot of cache because I have a couple daily driver type VMs that live entirely on cache, including a hundred G or so apiece for games, frequently used programs etc. Each VM also has storage space on the array for bulk items that don't need to be fast.

     

    The general thinking is that your cache should be large enough to comfortably hold an entire day's worth of new writes to the array. Then, when the Mover runs, it all gets written to the array and you start over fresh the next day. My understanding is that if the cache fills up and you still need to write more, you'll write directly to the array and lose the speed gain from the faster cache.

    So you should size your cache based on how much data you expect to write to the array in a day.
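
    A rough worked example of that sizing (the numbers are just placeholders based on my own setup, so plug in your own):

    # Sketch: back-of-the-envelope cache sizing. The figures are placeholders;
    # the idea is daily writes + whatever lives on cache permanently + headroom.
    vm_vdisks_gb = 2 * 150     # two daily-driver VMs that live entirely on cache
    daily_writes_gb = 100      # worst-case new data written to the array per day
    headroom = 1.25            # ~25% spare so the pool never runs completely full

    required_gb = (vm_vdisks_gb + daily_writes_gb) * headroom
    print(f"Cache pool should be at least ~{required_gb:.0f} GB")   # ~500 GB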

  10. I don't see why what you've proposed wouldn't work. In theory, all you really need is the 3.5" backplane, and you could Frankenstein together an enclosure for it and the drives if you don't mind having it sit on top like where you have that drive now. Or you could pick up a premade 3.5" drive enclosure and run a bunch of SATA / power cables from the 2.5" backplane out to it.

     

    I'd recommend thinking hard about using SSDs for your cache - I can't imagine the price point being more hateful than 2.5" 10k rpm server drives, and performance will be better.

     

    What's your eventual intended usage for this beast - VMs to play around with, any heavy lifting as far as CPU / video, pure storage, media server, something else?

  11. I had to go through a few different versions of a couple components before I could get everything to play nice together - I found that the chassis itself seems to be the most expensive component, crazily enough.

     

    If you are already happy with the other hardware (i.e. CPUs, RAM, etc.), you might consider picking up a barebones 3.5" R710 and migrating all your stuff into it, then trying to sell whatever is left over unused. In the long run, once the cost of drives is factored in, this will probably be cheaper.

     

    If you are looking to do something weird like put high-powered video cards in your setup, or use one of the big-boy CPUs that are compatible with the R710, there are additional things you'll want to check before you commit.

  12. Yeah I considered that when making my purchase - the 3.5" allows you to use spare drives from regular PCs too, whereas 2.5" spinning drives are almost all server grade hardware (or shitty laptop drives) that most folks don't have laying around the house.

     

    It may be worth switching to the 3.5" - I'm sure you could sell your current server on eBay or something pretty easily and recoup some of your cost.

  13. I am currently using 2 of these and am having no trouble whatsoever with them: https://www.newegg.com/Product/Product.aspx?Item=N82E16812117662

     

    You do have to turn a tight right angle with them to get them plugged into the backplane - if you don't mind doing more research and probably spending more, I'm sure you can find a version of this cable that has a 90-degree connector.

     

    I'm happy to answer any questions about my setup that you might have, though I can't provide a ton of more general advice beyond what worked for me and what I tried that didn't. I'm currently using three 2TB drives in the array and 3 SSDs for the cache. My next upgrade will probably be larger SSDs.

  14. I did need to buy longer cables, as the ports on the H200 are in a different spot that can't really be reached with the stock cables (even using the dedicated storage slot) barring really janky ugly cable routing.

     

    I am passing through some video cards and peripherals, though not entire drives. I let Unraid handle all the drives.