theunraidhomeuser


Posts posted by theunraidhomeuser

  1. 3 hours ago, SimonF said:

    You have 382G allocated is that spanning the NUMA nodes? Have you tried restricting memory allocation to memory allocated to the physical CPU that the VM is running on or are you spanning CPUs?

    I added this section from another forum post that I found:


     

      <numatune>
        <memory mode='preferred' nodeset='0'/>
      </numatune>
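From what I've read, mode='preferred' is only a soft hint. To check whether the VM's memory actually stays on node 0, something like this on the host should show the per-node breakdown (assuming the numactl/numastat tools are available):

    # per-NUMA-node memory usage of the qemu process(es)
    numastat -p qemu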
    

     

I've also changed the CPU pinning to the following... let's see:

     

[screenshot: updated CPU pinning]


Hi there and thanks for your help. As a first step, I'll try removing the memfd and shared lines.

I'm not an expert on NUMA nodes, so I'm not entirely sure what you mean by:

Quote

You have 382G allocated is that spanning the NUMA nodes? Have you tried restricting memory allocation to memory allocated to the physical CPU that the VM is running on or are you spanning CPUs?


     

     

Below are the CPU settings I have. As this is a VM, I don't know how you would restrict memory allocation to the memory attached to the physical CPU; if you could let me know what I need to do, I'd happily try that. I initially had 256GB assigned, then raised it because I thought the machine might be running out of memory due to a leak, but that turned out not to be the cause.
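If it helps, this is what I can run on the Unraid host to at least see the topology (standard tools, nothing Unraid-specific):

    # how many NUMA nodes there are, and which CPUs/memory belong to each
    numactl --hardware
    # cross-check the node assignment per CPU
    lscpu | grep -i numa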

     

    Thanks!

     

[screenshot: CPU settings]

     

And this is the VM setup, though you will have seen that from the diags.

     

[screenshot: VM setup]

Hi everyone, I get this on my Ubuntu VM in Unraid; the Unraid diags are attached. This is really hard to Google and I couldn't find any resolution. These are the last few lines of the VM log file. The VM crashes after being up for around 8-10 hours. I have a lot of hard drives attached via 2 USB PCIe passthrough cards, but it used to work reliably without crashing, so I'm not sure what's now causing this.

     

    -device '{"driver":"qxl-vga","id":"video0","max_outputs":1,"ram_size":67108864,"vram_size":67108864,"vram64_size_mb":0,"vgamem_mb":16,"bus":"pcie.0","addr":"0x1"}' \
    -device '{"driver":"vfio-pci","host":"0000:0e:00.0","id":"hostdev0","bus":"pci.5","addr":"0x0"}' \
    -device '{"driver":"vfio-pci","host":"0000:04:00.0","id":"hostdev1","bus":"pci.6","addr":"0x0"}' \
    -device '{"driver":"vfio-pci","host":"0000:05:00.0","id":"hostdev2","bus":"pci.7","addr":"0x0"}' \
    -device '{"driver":"vfio-pci","host":"0000:06:00.0","id":"hostdev3","bus":"pci.8","addr":"0x0"}' \
    -device '{"driver":"vfio-pci","host":"0000:07:00.0","id":"hostdev4","bus":"pci.9","addr":"0x0"}' \
    -device '{"driver":"vfio-pci","host":"0000:0e:00.2","id":"hostdev5","bus":"pci.10","addr":"0x0"}' \
    -device '{"driver":"vfio-pci","host":"0000:0e:00.3","id":"hostdev6","bus":"pci.11","addr":"0x0"}' \
    -device '{"driver":"vfio-pci","host":"0000:10:00.0","id":"hostdev7","bus":"pci.12","addr":"0x0"}' \
    -device '{"driver":"vfio-pci","host":"0000:88:00.0","id":"hostdev8","bus":"pci.13","addr":"0x0"}' \
    -device '{"driver":"vfio-pci","host":"0000:89:00.0","id":"hostdev9","bus":"pci.14","addr":"0x0"}' \
    -device '{"driver":"vfio-pci","host":"0000:8a:00.0","id":"hostdev10","bus":"pci.15","addr":"0x0"}' \
    -device '{"driver":"vfio-pci","host":"0000:8b:00.0","id":"hostdev11","bus":"pci.16","addr":"0x0"}' \
    -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
    -msg timestamp=on
    char device redirected to /dev/pts/0 (label charserial0)
    qxl_send_events: spice-server bug: guest stopped, ignoring
    failed to set up stack guard page: Cannot allocate memory
    2023-12-23 01:13:59.993+0000: shutting down, reason=crashed
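That last "Cannot allocate memory" line makes me wonder whether the host itself ran short of memory when the guest died. These are the host-side checks I'd run first (log path per standard Unraid; just a sketch):

    # look for OOM-killer activity around the crash time
    grep -iE 'oom|out of memory' /var/log/syslog
    # check the host's commit headroom
    grep -i commit /proc/meminfo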

     

    Thanks for any suggestions!

     

    super-diagnostics-20231223-0917.zip

  4. 43 minutes ago, SimonF said:

There is a newer version of virtiofsd which has to be used from QEMU 8 onwards, so it will be in the next Unraid release.

     

    You can download the rust version from here https://gitlab.com/virtio-fs/virtiofsd/-/releases/v1.8.0

     

and copy it to here:

     

    root@computenode:~# which virtiofsd
    /usr/bin/virtiofsd
    root@computenode:~# 

     

     

    Note it will not survive a host reboot.

     

There are also lots of btrfs errors on your system. Are you overclocking the CPU or memory?


Thanks for the suggestion, I might try that. Regarding the btrfs errors: no, but I think they're linked to the drives I'm passing through via virtiofs, as I created a pool in Unraid to pass through, and it's formatted with btrfs. I'm thinking of undoing that, passing the 2 SATA drives through to the VM directly with UAD, and then potentially creating a RAID0, as I want higher speed.
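For anyone following along, the manual virtiofsd swap SimonF describes would look roughly like this (the extracted binary name is an assumption; untested on my box):

    # download the rust virtiofsd from the GitLab release page, extract it,
    # then replace the stock binary at the path reported by `which virtiofsd`
    cp ./virtiofsd /usr/bin/virtiofsd   # assumes the extracted file is named "virtiofsd"
    chmod +x /usr/bin/virtiofsd
    # as noted above, /usr/bin is rebuilt on reboot, so the copy would have
    # to be re-run (e.g. from /boot/config/go) to survive a restart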

     

    keep you posted! Thanks!

  5. 32 minutes ago, SimonF said:

    If you are not using virtiofs

     

Oh yes, I am using it. I was using the Unraid 9p mode, but it's so slow at reads/writes that I switched to virtiofs, which seems a lot quicker. Do you suspect this could be a cause? I always used 9p in the past but had much less need for throughput... I might try 9p again if you think it could be the reason, but I'd love to understand why that would be the case (and in that case, maybe address the root cause instead).
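For reference, the guest-side mounts differ like this; the share tag and mount point are just examples:

    # 9p mount (what I used before)
    mount -t 9p -o trans=virtio,version=9p2000.L myshare /mnt/myshare
    # virtiofs mount (what I use now) -- noticeably faster for me
    mount -t virtiofs myshare /mnt/myshare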

  6. 17 minutes ago, SimonF said:

If you are not using virtiofs, I would remove the memfd and shared lines from the XML, or update to 6.12.6, which has a newer QEMU version (7.2). There is an issue with VMs (normally Windows) locking up if this is enabled on QEMU 7.1:

     

    <memoryBacking>
      <nosharepages/>
      <source type="memfd"/>
      <access mode="shared"/>
    </memoryBacking>

     

    Hi and thanks for your help. I just updated to 6.12.6 last night and changed the QEMU version to 7.2.

Worth mentioning: I never had this issue before. I was running Unraid with an Ubuntu VM for over 2 years, rock-solid. Recently I changed the scope of the VM, adding two PCIe USB 3.0 cards and a passthrough GPU. Since then, I feel I'm "overwhelming" the VM somehow, causing it to freeze. The log files don't really reveal what's going on (or I can't see it).

     

    Attached the latest diags.

I'm also suddenly getting a "Machine error" in the Fix Common Problems plugin; I'm not sure where that's coming from, and I don't know where to find the mcelog output...
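From what I could find, machine check events should show up in the kernel log, so I'll try these (assuming mcelog is even installed here):

    # machine check events usually land in the kernel ring buffer
    dmesg | grep -i 'machine check'
    # or ask the mcelog daemon directly, if it's running
    mcelog --client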

     

[screenshot: Fix Common Problems warning]

     

I'd appreciate your thoughts on troubleshooting this; I'm a bit lost, TBH...

     

    super-diagnostics-20231217-1237.zip

Hi everyone, I'm hoping for some help, as I can't figure out why my Ubuntu VM constantly crashes. I suspect the NVMe drives are overheating, but I can't really trace it down. The VM has 2 USB 3.0 PCIe cards and an NVIDIA RTX GPU passed through to it. All seemed to work until recently...
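In the meantime, this is how I'm checking the NVMe temperatures from the host (device name is an example):

    # report SMART data including temperature for an NVMe drive
    smartctl -a /dev/nvme0 | grep -i temp
    # or, if nvme-cli is installed
    nvme smart-log /dev/nvme0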

     

I'll try upgrading Unraid right now and see if that helps. The hardware is an HP Z840 workstation with 512GB RAM.

     

    Cheers!

    super-diagnostics-20231216-1135.zip

Hey, sorry, I posted an issue on your GitHub that's probably not a code issue, just one on my end. Hence here again:

     

Hi there, I'm trying to find documentation on this. I recently changed my setup, adding a lot of drives via USB (Syba 8-bay enclosures). These drives are not meant to be part of a pool; I just want to mount them. I believe they may previously have been part of a mergerfs filesystem, but normally that doesn't cause any issues, as mergerfs is non-destructive to the partition tables.

Do you have an idea how I can mount these drives? I can't create an Unraid pool; there are more than 30 drives (and I didn't have them in an Unraid pool before)...
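In the meantime, this is roughly how I've been poking at them by hand (device and mount point are examples):

    # list the drives with their filesystems and labels
    lsblk -o NAME,SIZE,FSTYPE,LABEL,MOUNTPOINT
    # mount one of them manually
    mkdir -p /mnt/disks/usb01
    mount /dev/sdX1 /mnt/disks/usb01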

    Cheers!

Hi there, I have 9 SYBA 8-bay hard drive docks with HDDs and want to attach them to Unraid via USB 3.0. I have a PCIe card with USB 3.0 ports; however, when I attach the SYBA bays, only 2 of them, i.e. 16 drives, get recognized. When I unplug them and replug them 2 at a time, all drives are recognized. Conscious of USB 3 bandwidth limitations, I was still hoping the drives would at least show up and work, given all the PR around 127 devices per controller etc.

     

That doesn't seem to work. Should I try USB 2.0? I know it was less restrictive, but of course much slower...
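In case it helps to diagnose, this is what I can pull from the box (standard tools):

    # show the USB hub tree with the negotiated speed of each device
    lsusb -t
    # count how many USB devices actually enumerated
    lsusb | wc -l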

     

    A bit clueless, to be honest... Appreciate anyone's ideas.

  10. Hi all,

     

suddenly, for a few weeks now, I can't access my SMB network shares (created with the Unraid GUI user/share functionality) anymore. All my shares bring up an error on both Windows 11 and Ubuntu 22.04 clients.

     

At first I thought an Unraid upgrade might have disabled the older SMB versions potentially used by Windows, but then I realized my Ubuntu machine was equally unable to connect to the shares.

     

    Things I've done:

     

    • rebooted the machine
    • rebuilt parity on my hard drives
• tailed the syslog and nothing comes up
    • deleted credentials from Windows Credential Manager and rebooted

     

Worth noting: when I try to access the NAS through the domain name (super-nas.local), I don't even get to see the folder tree; it immediately prompts for user/password (and then brings up the first screenshot below).
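From the Ubuntu client I can also reproduce it outside the GUI; the user and share names here are examples:

    # list the shares the server offers
    smbclient -L //super-nas.local -U myuser
    # try to open a specific share
    smbclient //super-nas.local/myshare -U myuser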

     

My diags are attached; thanks for taking the time to assist!

    Example error message on Windows 11 client:

     

[screenshot: example SMB error message on Windows 11]

     

     

[screenshot]

[screenshot]

     

    super-nas.local-diagnostics-20231022-1121.zip

  11. Hi everyone,

     

I have three Unraid Pro licenses, all linked to Connect. One is connected just fine, and two are having issues. The two with issues are in a different physical location than the one that is working.

     

    I am getting the following messages on both machines:

     

    Unraid API Error

    CLOUD: Connecting - refresh the page for an updated status.


     

    What I did:

    • disconnected and reconnected to UNRAID connect
    • logged out and back in from the UNRAID account in question
    • ensured that both UNRAID installs are on the latest version 6.12.3
    • ensured that both UNRAID installs can connect to the internet
    • updated all plugins

     

    What else could be the issue? Would the diagnostics file be of any use here as this seems more like a connectivity issue?
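Happy to poke at the API service directly on the affected boxes if that helps; if I have the command names right from the Connect docs, it would be something like:

    # restart the local Connect API service
    unraid-api restart
    # then check what status it reports
    unraid-api status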

     

Cheers everyone, I appreciate your help!

     

     

Hi there, for those of you who know what I'm talking about in the subject, i.e. a Zero Trust tunnel via Cloudflare: would you reckon it's a safe way to access Unraid in lieu of a VPN?

     

I have unraid.mydomain.com routed to my Unraid machine via Cloudflare Zero Trust (using the cloudflaredtunnel Docker plugin).
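For reference, the tunnel is the standard ingress mapping, roughly along these lines (tunnel ID and LAN IP are placeholders):

    # /root/.cloudflared/config.yml
    tunnel: <tunnel-id>
    credentials-file: /root/.cloudflared/<tunnel-id>.json
    ingress:
      - hostname: unraid.mydomain.com
        service: https://192.168.1.10:443
        originRequest:
          noTLSVerify: true   # Unraid's GUI cert is self-signed
      - service: http_status:404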

     

I'd appreciate your thoughts. Ideally, I'd like to put .htaccess/.htpasswd-style auth in front of the Unraid login screen for another layer of protection, but I can't seem to figure that out...

     

    Cheers!

  13. Hi there, I have a weird issue that I haven't encountered before. Prior to playing around too much on the command line, I thought I'd ask the community. Diags attached.

     

    Experienced Behaviour:

• when creating a new share in the GUI of 6.12.3, the new share does NOT get created, neither in the GUI nor in the file system.
• No error message appears; the share page just refreshes as it normally would.
• when going into the Shares section, the share is not there.

     

    Expected Behaviour:

    • when creating a share, it gets created

     

    Things I did:

• tried different browsers (I use Brave by default but know it has some aggressive blocking). I also tried Chrome with no extensions enabled; same result.
• just launched a parity check to rule out any issues; this will take 2 days
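One more console check I can run, to see whether a new share lands anywhere at all (share name is an example):

    # a share should appear under /mnt/user and on at least one disk
    ls -ld /mnt/user/newshare /mnt/disk*/newshare
    # check for read-only or missing mounts
    df -h | grep -E '/mnt/(disk|user)'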

     

    Things I noticed:

• I seem to have an issue with one of my drives (see screenshot below). I just noticed it; in the GUI all appears OK... another reason to run my parity check.
• In the GUI, all drives are reported as "healthy"; however, in MC I noticed the following:

     

[screenshot: MC showing an issue with disk 2]

     

Does anyone have any ideas? Should I try to remount disk 2 via mount -a?

     

    Thanks everyone

     

     

    super-nas-diagnostics-20230723-1015.zip

I don't know how to do that; I enabled syslog forwarding to another NAS device and that's the output I got...

Meanwhile, I've disabled C-states and am monitoring; things have been good for a few days. I also did a BIOS update on my MSI board (though none of the changelog items suggested any fixes in this direction), so let's see.

     

    Thanks again for your ongoing support!