theunraidhomeuser

Members • Posts: 66 • Joined • Last visited
Everything posted by theunraidhomeuser

  1. I added this section from another forum post that I found: <numatune> <memory mode='preferred' nodeset='0'/> </numatune> I've also changed the CPU pinning to the following... let's see.
  2. Hi there, and thanks for your help. I'll try removing the memfs and shared sections as a first step. I'm not an expert on NUMA nodes, so I'm not entirely sure what you mean by: Below are the CPU settings I have. As this is a VM, I don't know how you would restrict memory allocation to the memory allocated to the CPU; if you could let me know what I need to do, I'd happily try it. I initially had 256GB assigned, but then thought the machine might be running out of memory due to some leak; that turned out not to be the cause. Thanks! And this is the VM setup, but you will have seen that from the diags.
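On restricting a VM's memory to one NUMA node: in libvirt XML this is done with a <numatune> element, usually paired with <cputune> pinning so the vCPUs sit on the same node as the memory. A minimal sketch, assuming node 0 holds the pinned cores (node and CPU numbers are placeholders; `numactl --hardware` on the host shows the real topology):

```xml
<domain type='kvm'>
  <!-- keep guest memory on NUMA node 0; 'strict' fails allocation rather
       than spilling to another node, while 'preferred' merely tries node 0 first -->
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>
  <!-- pin vCPUs to host cores on the same node (placeholder core numbers) -->
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
  </cputune>
</domain>
```

With mode='strict', memory pressure on node 0 surfaces as allocation failures instead of silent cross-node traffic, which makes it the stricter but more diagnosable choice.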
  3. I think I found the issue and created a separate thread for it, so it's easier to find.
  4. Hi everyone, I get this on my Ubuntu VM in Unraid; attaching the Unraid diags. Really hard to google this, and I couldn't find any resolution. These are the last few lines of the VM log file. The VM crashes after around 8-10 hours of being on. I have a lot of hard drives attached via 2 USB PCI-E passthrough cards, but it used to work reliably without crashing; not sure what's now causing this:

     -device '{"driver":"qxl-vga","id":"video0","max_outputs":1,"ram_size":67108864,"vram_size":67108864,"vram64_size_mb":0,"vgamem_mb":16,"bus":"pcie.0","addr":"0x1"}' \
     -device '{"driver":"vfio-pci","host":"0000:0e:00.0","id":"hostdev0","bus":"pci.5","addr":"0x0"}' \
     -device '{"driver":"vfio-pci","host":"0000:04:00.0","id":"hostdev1","bus":"pci.6","addr":"0x0"}' \
     -device '{"driver":"vfio-pci","host":"0000:05:00.0","id":"hostdev2","bus":"pci.7","addr":"0x0"}' \
     -device '{"driver":"vfio-pci","host":"0000:06:00.0","id":"hostdev3","bus":"pci.8","addr":"0x0"}' \
     -device '{"driver":"vfio-pci","host":"0000:07:00.0","id":"hostdev4","bus":"pci.9","addr":"0x0"}' \
     -device '{"driver":"vfio-pci","host":"0000:0e:00.2","id":"hostdev5","bus":"pci.10","addr":"0x0"}' \
     -device '{"driver":"vfio-pci","host":"0000:0e:00.3","id":"hostdev6","bus":"pci.11","addr":"0x0"}' \
     -device '{"driver":"vfio-pci","host":"0000:10:00.0","id":"hostdev7","bus":"pci.12","addr":"0x0"}' \
     -device '{"driver":"vfio-pci","host":"0000:88:00.0","id":"hostdev8","bus":"pci.13","addr":"0x0"}' \
     -device '{"driver":"vfio-pci","host":"0000:89:00.0","id":"hostdev9","bus":"pci.14","addr":"0x0"}' \
     -device '{"driver":"vfio-pci","host":"0000:8a:00.0","id":"hostdev10","bus":"pci.15","addr":"0x0"}' \
     -device '{"driver":"vfio-pci","host":"0000:8b:00.0","id":"hostdev11","bus":"pci.16","addr":"0x0"}' \
     -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
     -msg timestamp=on
     char device redirected to /dev/pts/0 (label charserial0)
     qxl_send_events: spice-server bug: guest stopped, ignoring
     failed to set up stack guard page: Cannot allocate memory
     2023-12-23 01:13:59.993+0000: shutting down, reason=crashed

     Thanks for any suggestions! super-diagnostics-20231223-0917.zip
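On the "failed to set up stack guard page: Cannot allocate memory" line: that message typically appears when a new thread's guard page cannot be mapped, which can happen when the process runs into the kernel's per-process mapping limit. A hedged host-side check, not a confirmed fix for this crash (it uses the current shell's PID as a stand-in for the QEMU PID, which `pgrep -o qemu-system-x86_64` would give you):

```shell
# Count the memory mappings of a process and compare against the kernel's
# per-process limit; a process near the limit cannot map new thread stacks.
# $$ is this shell; substitute the real QEMU PID when diagnosing the VM.
maps=$(wc -l < "/proc/$$/maps")
limit=$(cat /proc/sys/vm/max_map_count)
echo "mappings=$maps limit=$limit"
```

If the mapping count for the QEMU process sits close to the limit, raising vm.max_map_count via sysctl is one avenue to test; with this many vfio-pci devices passed through, the mapping footprint is larger than a typical VM's.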
  5. Thanks for the suggestion, I might try that. Regarding btrfs errors: no, but I think it's linked to the drives I'm passing through via virtiofs, as I created a pool in Unraid to pass through, and it's formatted with btrfs. I'm thinking of undoing that and passing the 2 SATA drives through to the VM directly with UAD, and then potentially creating a RAID0, as I want to leverage higher speed. I'll keep you posted! Thanks!
  6. Oh yes, I am using it. I was using the Unraid 9p mode, but it's so damn slow to read/write that I switched to virtiofs, which seems a lot quicker. Do you suspect this could be a cause? I always used the 9p mode in the past but had much less need for good throughput... I might try 9p again if you think that could be a reason, but I'd love to understand why that would be the case (and in that case, maybe try and address the root cause).
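For anyone comparing the two: the libvirt side of a virtiofs export looks roughly like the sketch below (directory paths and the mount tag are placeholders, not this VM's actual config). 9p uses the same <filesystem> element with a virtio 9p driver instead; virtiofs additionally relies on shared memory between host and guest, which is why memfd/shared-memory sections appear in the domain XML alongside it.

```xml
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/mnt/user/share'/>  <!-- host directory, placeholder -->
  <target dir='share'/>            <!-- mount tag the guest sees -->
</filesystem>
```

Inside the guest, the tag is mounted with `mount -t virtiofs share /mnt/share`.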
  7. Hi, and thanks for your help. I just updated to 6.12.6 last night and changed the QEMU version to 7.2. Worth mentioning: I never had this issue before; I was running Unraid with an Ubuntu VM for over 2 years, rock-solid. Recently, I changed the scope of the VM, adding two PCI-E USB 3.0 cards and a passthrough GPU. Since then, I feel that I'm "overwhelming" the VM somehow, causing it to freeze. The log files don't really reveal what's going on (or I can't see it). Attached are the latest diags. I'm also suddenly getting a "Machine error" in the Fix Common Problems plugin; not sure where that is coming from, and I don't know where to find the mcelog output... I'd appreciate your thoughts on how to troubleshoot; I'm a bit lost, TBH... super-diagnostics-20231217-1237.zip
  8. Hi everyone, I'm hoping for some help, as I can't seem to figure out why my Ubuntu VM constantly crashes. I suspect overheating of the NVMe drives, but I can't really trace it down. I've got a VM that has 2 USB 3.0 PCIe cards and an NVIDIA RTX GPU passed through to it. All seemed to work until recently... I will try to upgrade Unraid right now and see if that helps. The hardware I'm running is an HP Z840 Workstation with 512GB RAM. Cheers! super-diagnostics-20231216-1135.zip
  9. Thanks, I think I figured it out. The PCI-E USB 3.0 cards were simply maxed out... Somebody else explained the endpoint logic to me, which seems to work out!
  10. Hey, sorry, I posted an issue on your GitHub that's probably not a code issue, just one for me. Hence, here again: Hi there, I'm trying to find documentation on this. I recently changed my setup, adding a lot of drives via USB (Syba 8-bay enclosures). These drives are not meant to be part of a pool; I just want to mount them. I believe they may previously have been part of a mergerfs filesystem, but normally that doesn't cause any issues, as mergerfs is non-destructive to the partition tables. Do you have an idea how I can mount these drives? I can't create an Unraid pool; there are more than 30 drives (and I didn't have them in an Unraid pool before)... Cheers!
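In case it helps anyone landing here: outside of Unraid pools, drives can be mounted the plain-Linux way with /etc/fstab entries keyed on stable /dev/disk/by-id paths; a hypothetical excerpt (device names, mount points, and filesystem type are all placeholders):

```
# illustrative /etc/fstab entries; nofail keeps boot from hanging on a missing USB disk
/dev/disk/by-id/usb-EXAMPLE-DISK-01-part1  /mnt/usb01  xfs  defaults,nofail  0  0
/dev/disk/by-id/usb-EXAMPLE-DISK-02-part1  /mnt/usb02  xfs  defaults,nofail  0  0
```

On Unraid specifically, the Unassigned Devices (UAD) plugin is the usual route for mounting drives that sit outside the array or a pool.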
  11. Sure thing, here it comes! All 72 drives are connected; however, not all are showing up. I tried a mix of powered USB 3.2 hubs, unpowered USB 2.0 hubs, and daisy-chaining the hubs, as I read somewhere that could help... I'd appreciate your views, or feedback from anyone really who has more than 30 USB hard drives attached to their NAS. super-diagnostics-20231122-1900.zip
  12. Thanks for the quick reply. I concur; I had that issue with Terramaster enclosures and others. However, the Syba enclosures pass through each drive "as is", i.e. I can even see the serial number of each single drive in Unraid... So I don't think that's the problem...
  13. So I go to "Replace Key"... I then get a pop-up with my Unraid server; the pop-up then closes, but there is no "Finish Key Replacement" button. What am I missing? I copied the Pro.key from the other USB to the new USB... I was following this guide: Cheers!
  14. Hi there, I have 9 SYBA 8-bay hard drive docks with HDDs and want to attach them to Unraid via USB 3.0. I have a PCI-E card with USB 3.0 ports; however, when I attach the 7 SYBA bays, only 2 of them, i.e. 16 drives, get recognized. When I unplug them and replug them 2 at a time, all drives are recognized. Conscious of USB 3 bandwidth limitations, I was still hoping the drives would show up and work, with all the PR around 127 devices, etc. It doesn't seem to work. Should I try USB 2.0, as I know that was less restrictive but of course much slower? A bit clueless, to be honest... I'd appreciate anyone's ideas.
  15. Hi all, since a few weeks ago, I suddenly can't use the Unraid GUI user/share functionality with my network shares anymore over SMB. All my shares bring up an error, on both Windows 11 and Ubuntu 22.04 clients. At first, I thought an Unraid upgrade might have disabled the older SMB versions potentially used by Windows, but then I realized my Ubuntu machine was equally unable to connect to the shares. Things I've done:
      • rebooted the machine
      • rebuilt parity on my hard drives
      • trailed the syslog; nothing comes up
      • deleted credentials from Windows Credential Manager and rebooted
      Worth noting: when I try to access the NAS through the domain name (super-nas.local), I don't even get to see the folder tree; it immediately prompts for user/pwd (and then brings up the first screenshot). Attached are my diags; I appreciate your time trying to assist! Example error message on a Windows 11 client: super-nas.local-diagnostics-20231022-1121.zip
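On the SMB-version theory: Samba's accepted protocol range is controlled in smb.conf, so it is worth checking what the server is configured to allow. An illustrative [global] excerpt, not taken from this server's actual config:

```
[global]
   # with settings like these, clients speaking only SMB1 are rejected
   server min protocol = SMB2
   server max protocol = SMB3
```

From the Ubuntu client, `smbclient -L //super-nas.local -U <user>` lists the shares and helps distinguish an authentication failure from a protocol-negotiation failure.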
  16. Hi everyone, I have three Unraid PRO licenses, all linked to Connect. One is connected just fine, and two are having issues. The two that are having issues are in a different physical location than the one that is working. I am getting the following message on both machines:
      Unraid API Error CLOUD: Connecting - refresh the page for an updated status.
      What I did:
      • disconnected and reconnected to Unraid Connect
      • logged out and back in from the Unraid account in question
      • ensured that both Unraid installs are on the latest version, 6.12.3
      • ensured that both Unraid installs can connect to the internet
      • updated all plugins
      What else could be the issue? Would the diagnostics file be of any use here, as this seems more like a connectivity issue? Cheers, anyone; appreciate your help!
  17. Hi there, those of you who know what I'm talking about in the subject, i.e. a Zero Trust tunnel via Cloudflare: would you reckon it's a safe way to access Unraid in lieu of a VPN? I have unraid.mydomain.com routed to my Unraid machine via Cloudflare Zero Trust (using the cloudflared tunnel Docker plugin). I'd appreciate your thoughts; ideally, I'd like to put an .htaccess and .htpasswd in front of the Unraid login screen for another layer of protection, but I can't seem to figure that out... Cheers!
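For anyone searching later: the tunnel itself is just an ingress mapping in cloudflared's config file; a hypothetical sketch (tunnel ID, credentials path, and LAN address are placeholders):

```yaml
# illustrative cloudflared config.yml
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  - hostname: unraid.mydomain.com
    service: http://192.168.1.10:80   # LAN address of the Unraid GUI (placeholder)
  - service: http_status:404          # required catch-all rule
```

Rather than .htaccess/.htpasswd, Cloudflare's own Access policies can require a login before any traffic reaches the tunnel, which serves the same extra-layer purpose without touching the Unraid webserver.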
  18. Hi there, I have a weird issue that I haven't encountered before. Prior to playing around too much on the command line, I thought I'd ask the community. Diags attached.
      Experienced behaviour:
      • When creating a new share in the GUI of 6.12.3, the new share does NOT get created, neither in the GUI nor in the file system. No error message is shown; the share page just refreshes as it normally would.
      • When going into the Shares section, the share is not there.
      Expected behaviour:
      • When creating a share, it gets created.
      Things I did:
      • Tried different browsers (I use Brave by default but know it has some aggressive blocking techniques), so I tried Chrome with no extensions enabled. Same result.
      • Just launched a parity check to avoid any issues; this will take 2 days.
      Things I noticed:
      • I seem to have an issue with one of my drives (see screenshot). Just noticed; another reason to run my parity check.
      • In the GUI, all drives are reported as "healthy"; however, in mc, I noticed the following:
      Does anyone have any ideas? Should I try to remount disk 2 via a mount -a? Thanks, everyone. super-nas-diagnostics-20230723-1015.zip
  19. I don't know how to do that; I enabled syslog to another NAS device, and that's the output I got... Meanwhile, I've been playing with disabling C-States and am monitoring; for a few days, things have been good. I also did a BIOS update of my MSI board (though none of the changelog items suggested any fixes in this direction), so let's see. Thanks again for your ongoing support!
  20. Hi there, none of this worked, unfortunately... Any other ideas? Isn't this something that should be fixed by the Unraid devs? Should I file a bug report?