joshbgosh10592

Everything posted by joshbgosh10592

  1. Those are my Proxmox nodes' bindings, which should be NFS. The nodes weren't happy that I rebooted the NAS out from under them...
  2. Attached is the diag zip. However, it's now not throwing those errors; I figure it's just a matter of time, though. I'm working on setting up a syslog server, but haven't had the time yet. nas-diagnostics-20200730-1144.zip
  3. I tried to gather it, but the page just hangs. I also don't have the CPU stats populated (all cores show 0%). I think this happened last time I manually deleted the syslog files. I use Chrome.
  4. I experienced my unRAID log file running completely full a few weeks ago, and I cleared it by deleting syslog* from /var/log/, but I didn't think twice about it. Now it's full again, and I'm looking at it. I'm getting spammed every second by these messages, which I don't understand in the slightest:
     Jul 21 04:40:20 NAS nginx: 2020/07/21 04:40:20 [crit] 22193#22193: ngx_slab_alloc() failed: no memory
     Jul 21 04:40:20 NAS nginx: 2020/07/21 04:40:20 [error] 22193#22193: shpool alloc failed
     Jul 21 04:40:20 NAS nginx: 2020/07/21 04:40:20 [error] 22193#22193: nchan: Out of shared memory while allocating message of size 6775. Increase nchan_max_reserved_memory.
     Jul 21 04:40:20 NAS nginx: 2020/07/21 04:40:20 [error] 22193#22193: *14271051 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
     Jul 21 04:40:20 NAS nginx: 2020/07/21 04:40:20 [error] 22193#22193: MEMSTORE:00: can't create shared message for channel /disks
     Anyone have any advice on where to even start?
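For reference, these messages come from nchan, the pub/sub module the unRAID webUI uses to stream disk stats, running out of its shared-memory pool; restarting nginx (on unRAID, via its Slackware-style rc script /etc/rc.d/rc.nginx) resets that pool. For the full log partition itself, truncating the live syslog in place is gentler than deleting it, because the logger keeps writing to the same inode. A minimal sketch of the idea, demonstrated on a scratch file rather than the real /var/log/syslog:

```shell
# Demonstrated on a scratch file; on the real box the target
# would be /var/log/syslog.
log=$(mktemp)
printf 'spammy nchan lines\n' > "$log"
: > "$log"          # truncate in place instead of rm, keeping the inode
wc -c < "$log"      # prints 0 - file emptied, open handles still valid
rm -f "$log"
```

Deleting syslog* outright (as in the post) leaves the logging daemon holding a handle to an unlinked file, which is likely why the webUI stats misbehaved afterwards.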
  5. Did you ever figure this out? I have my IoT devices on one subnet while computers are on another. It's great for security, but I can't AirPlay because of it..
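For what it's worth, AirPlay discovery rides on mDNS (Bonjour), which doesn't cross subnet boundaries on its own; a common workaround is an mDNS reflector. A sketch using Avahi, assuming a host (or router) with interfaces in both the IoT and computer subnets; the interface names are placeholders:

```ini
# /etc/avahi/avahi-daemon.conf (sketch)
[server]
# limit reflection to the two relevant interfaces (placeholder names)
allow-interfaces=eth0.220,eth0.221

[reflector]
enable-reflector=yes
```

Many routers and controllers (UniFi included) also expose an "mDNS"/multicast DNS toggle that does the same job without a dedicated Avahi box.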
  6. Any idea how to make the mergerFS binary stay persistently in /bin/ after a reboot? I make the file in /mnt/user/share/name and copy it to /bin, but after a reboot it's gone from /bin, though it survived inside /mnt/user/share/name... Also, anyone have any idea how to make it readable from an NFS client (Proxmox)? noforget and use_ino don't allow it...
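On the persistence question: unRAID rebuilds its root filesystem in RAM from the flash archives on every boot, so anything copied into /bin vanishes at reboot, while /mnt/user/share/name survives because it lives on the array. The usual place to restore such files is the go script on the flash drive, which runs at every boot. Note, though, that the array (and thus /mnt/user) isn't mounted yet at that point, so keep the copy on /boot. A sketch, with /boot/extra/mergerfs as a hypothetical flash location:

```shell
# Appended to /boot/config/go (sketch) - runs on every boot,
# before the array is mounted, so copy from flash, not /mnt/user.
cp /boot/extra/mergerfs /bin/mergerfs   # hypothetical flash path
chmod +x /bin/mergerfs                  # /boot is FAT, so restore the exec bit
```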
  7. I haven't really found any straight documentation on how to properly set up NFS permissions... I'm trying to have an entire subnet get rw to a share, and then sprinkle in some other IPs. Right now, I'm working within the Unassigned Devices plugin, but will eventually expand this to normal shares. I've been able to get multiple single IPs to work properly, but I found there's a limit on how many characters the rule accepts. This is what I'm trying, but a device with the IP 10.9.220.11 is unable to mount, while 10.9.221.69 is. Both are Proxmox servers, and I'm trying to mount them the same way. 10.9.221.69(rw) 10.9.0.253(rw) 10.9.0.252(rw) 10.9.220.0/24(rw) What's the proper way?
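For anyone searching later: the rule unRAID writes into /etc/exports is a space-separated list of host(options) entries, with no space between each host and its parentheses; both CIDR subnets and single IPs are valid clients. For a client that mounts as root, like Proxmox, you typically also want no_root_squash, or its writes arrive mapped to nobody. A sketch of the kind of line that ends up in /etc/exports (the share path and exact option choices are illustrative):

```
"/mnt/disks/nas-ssd" 10.9.220.0/24(sec=sys,rw,no_root_squash) 10.9.221.69(sec=sys,rw,no_root_squash) 10.9.0.253(rw) 10.9.0.252(rw)
```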
  8. I also have the same issue with clicking "Apply" when either enabling or disabling VM manager. Did anyone ever submit a bug report for 6.8 for that?
  9. Oh hahaha, yea, I can see how that would be confusing. And that works haha. I'll keep that in mind in case something else happens during my adventures of reconnecting everything to new subnets lol. Next up: Proxmox cluster nodes... This'll be fun...
  10. Yea, I was scrolling through the network settings trying to figure out what I wanted to do with the extra NICs, and as I was adjusting my hand, my pinky somehow hit save. I've rebooted that box at least 6 times since the issue started. There was something up with the config. Thinking back, I believe there wasn't an entry for the default route at the very bottom. That would make perfect sense if that was the case, but I wonder why it wasn't generated by unRAID. Either way, I now have a backup copy of that file just in case... lol
  11. FINALLY GOT IT!!! There must have been something screwed up with the unRAID network.conf. I replaced it with the stock one from the installer zip, rebooted, reconfigured bond0 with the static IP (after testing with DHCP assigned) and it's FINALLY back up. NO freaking clue what my rogue pinky did, but something was weird with the network.conf.
  12. I know this is pretty old, but what all did you need to do to get the Redfish version? Is it a firmware you install on the lights-out management card? I have the X9 and can't use the virtual console anymore because of the Java version...
  13. I just went through a home network re-org where I split everything into multiple smaller /24 networks with VLANs instead of one massive /16. I had unRAID online and working properly after the migration, until I was looking at the unRAID networking settings and my pinky clicked save. I have no idea what changed... Symptoms:
      • Devices on the same VLAN as my NAS (VLAN/network 220) can ping the NAS. No devices (that I know of) on other VLANs/networks can ping it. No VLAN changes happened between my pinky mishap and now.
      • My switch does support bonding mode 802.3ad, and devices on VLAN 220 can ping the NAS with these settings. There is no link isolation, and my switch is a UniFi layer 2 switch. The VLAN settings on those ports are native VLAN 220, with tagged VLAN 221 (virtual servers that will live on this and other hosts). I've also tried with no bonding, so it's not a bonding issue.
      • unRAID can ping the gateway of its own subnet, but nothing outside it (including the internet), which totally makes me think it's a default gateway issue, but it's set correctly. Other devices on the same network can ping outside, so it has to be something with the NAS.
      • I set it to DHCP, and it received an IP address on the correct network (.85 instead of .50), but the symptoms stayed the same: other clients can't ping/connect to it unless they're on the same network. However, there are other devices on the same 220 network that all clients can ping.
      The only things that changed between when it was working and when it wasn't were unRAID settings. Any ideas? I also posted this to Reddit in hope of a quick resolution, as quite a few Plex family members are now bored out of their minds: https://www.reddit.com/r/unRAID/comments/ejkuaz/unraid_cannot_bring_online_after_pinky_went_rogue/
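For anyone triaging the same symptoms: "can reach my own subnet and gateway, but nothing beyond" is the classic signature of a missing default route. On the unRAID console, ip route should show a line starting with default via followed by the gateway. A tiny sketch of that check against a made-up routing table (the sample string is an assumption, not output from this system):

```shell
# Made-up sample of 'ip route' output with the default route missing;
# on a real box you would pipe the actual command: ip route | grep ...
routes='10.9.220.0/24 dev br0 proto kernel scope link src 10.9.220.50'
if printf '%s\n' "$routes" | grep -q '^default via'; then
  echo "default route present"
else
  echo "no default route: only same-subnet traffic will work"
fi
```

This matches the eventual diagnosis in the follow-up post: the default route entry was missing from the network config.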
  14. When I go to make the change in the <cpu> section, I receive an error saying, "XML error: Non-empty feature list specified without CPU model". My section is:
      <cpu>
        <topology sockets='1' cores='5' threads='1'/>
        <feature policy='require' name='vmx'/>
      </cpu>
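For the record, libvirt rejects a <feature> list unless the <cpu> element also names a model (or a mode that implies one). One way around it, assuming nesting is already enabled on the host, is host-passthrough, which hands the guest the host's CPU, vmx included, so the explicit feature line isn't needed:

```xml
<!-- Sketch: replaces the <cpu> block from the post -->
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='5' threads='1'/>
</cpu>
```

Alternatively, keep the feature line but add a model, e.g. <cpu mode='host-model'> with the same children.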
  15. No worries, thank you! So, just to be sure, I'd add the kvm-intel.nested=1 in the Unraid OS section, so it's exactly as below?
      kernel /bzimage
      append initrd=/bzroot kvm-intel.nested=1
      But then where do you go to edit the VM's CPU section? Like, where are the config files for them? I'm assuming /etc/libvirt/qemu/VMName.xml, correct?
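For reference, the append line belongs in the Unraid OS label inside /boot/syslinux/syslinux.cfg, roughly like this (sketch of the stock section with the parameter added):

```
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot kvm-intel.nested=1
```

As for the per-VM XML: the files under /etc/libvirt/qemu/ are managed by libvirt and manual edits get overwritten, so the safer routes are virsh edit VMName from the console, or switching the VM's edit page in the unRAID webUI to XML view.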
  16. Hello! I'm working on migrating my physical Plex server to a VM hosted inside a Proxmox cluster, with unRAID being the NAS for storage and possibly a tertiary VM host once my unRAID bug report is solved. However, because of the amount of network traffic from media streaming, VM migration, and even just VMs living on the NAS, I've had my Proxmox cluster lose quorum at least twice, and reboot. So, it's time to segregate my network. My Proxmox nodes as well as my unRAID server all have 8 Gb NICs. What I'd like to do:
      • Have the three Proxmox nodes (2 physical, 1 hosted on unRAID) have their own 172.16.0.0 network for cluster traffic (probably dual Gb NICs). I'll also be setting up a dumb switch for ring1, the secondary cluster network, for when my main switch does firmware upgrades. Bonus points if I can also use this switch as a failover for the rest of my requirements.
      • Have Proxmox store its VMs on the NAS using a 172.16.1.0 network, so VM storage is on a separate network (and thus separate NICs).
      • Have the Plex VM stream media from the NAS, sit on its own 192. network, and be reachable no matter which physical host it lives on.
      • Have all the connections mentioned be direct connections. There will be another NIC or two for the client side using LAG (I have a UniFi switch that supports this); I'm just limited on switch ports.
      I figured if I could make unRAID host a virtual switch (like Hyper-V) and assign that virtual switch an IP address of 192.168.0.1, 172.16.0.1, or 172.16.1.1 (no gateway) for each bond, the Proxmox nodes would always feel like they're going to the same location for data for HA, as well as Plex. Is this possible? I haven't found any documentation on a v-switch within unRAID besides a PDF that displays the different options.
  17. @jonp Just wondering if you were able to reproduce this issue in your lab or not. I know it's been crazy with the 6.8 version coming up, but I'm just curious.
  18. Thank you! I've submitted a bug report:
  19. As per the thread below, I'm submitting a bug report for the inability to host nested VMs. In my case, I have a Proxmox VM (PVE-Witness) running on unRAID. It's the third node of my Proxmox cluster. When I try to fire up a VM on PVE-Witness that was just running on PVE-1, I'm met with the error: TASK ERROR: KVM virtualisation configured, but not available. Either disable in VM configuration or enable in BIOS. As requested, I attempted a similar task on a newly created Ubuntu 18.04 VM (Ubuntu). When creating a VM in Ubuntu, I'm met with: "Warning: KVM is not available." nas-diagnostics-20191021-0320.zip
  20. If by that you did mean to try Ubuntu, that fails, saying "Your CPU does NOT support KVM." Should the BIOS type matter? I'm concerned because all the Proxmox VMs use SeaBIOS, and I now see unRAID's Proxmox VM shows OVMF. I'd change it as a test, but it seems you can't change it once the VM is created, and the Proxmox VM is the witness in a cluster (which means it's a PITA to reconfigure).
  21. I'm sorry, you mean to create a new VM running Ubuntu inside unRAID and attempting to build a nested VM inside that, right?
  22. Sorry to necro this thread, but I haven't found anything else anywhere that helps. I'm trying to do exactly this on 6.7.2 with a Proxmox VM. When I try to fire up a VM on this Proxmox VM, I receive an error saying that virtualization is configured but not enabled in the BIOS. When I append kvm_intel.nested=1 to the Unraid OS label, libvirt fails to start when I tell the VM manager to start back up.
  23. I just looked at the logging for the first time in quite a while, and I'm being flooded by errors, about 3 every second:
      Aug 27 23:31:24 NAS root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
      I've seen in this thread that it was a security increase within unRAID's webUI, but that was resolved in 6.3, I believe. Unassigned Devices is fully updated. Any ideas?
  24. True, I thought it would show raid5 for the system (I set mconvert to raid1, so I was expecting that to show raid1). Thank you! Is there a calculation for that? I was expecting it to be "Free space, minus 1TB"
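For reference, the rough math: btrfs RAID5 data across N equal devices leaves about (N-1)/N of raw capacity usable (one device's worth of parity), while RAID1 metadata keeps two copies of a comparatively tiny amount, so "free space minus one device" is the right intuition for equal devices. A back-of-envelope sketch for an assumed pool of three 1024 GiB devices:

```shell
# Assumed pool: three equal 1024 GiB devices, data in RAID5
n=3; dev=1024                       # GiB per device (assumption)
raw=$((n * dev))
raid5_data=$(( (n - 1) * dev ))     # one device's worth goes to parity
echo "raw=${raw}GiB usable_data=${raid5_data}GiB"
# -> raw=3072GiB usable_data=2048GiB
```

For the real number on a live pool, btrfs filesystem usage <mountpoint> (as opposed to filesystem df) prints a "Free (estimated)" figure that accounts for the allocation profiles.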
  25. I was actually just editing the quoted post, sorry about that. I think I got it! Thank you!! Now my next and hopefully final question... Shouldn't System and Metadata say RAID5? The UI shows sdg as having 3TB free, when it should have only 2. Here's what I ran, and the results of btrfs filesystem df:
      root@NAS:~# btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/disks/nas-ssd-pool/
      Done, had to relocate 4 out of 4 chunks
      root@NAS:~# btrfs filesystem df /mnt/disks/nas-ssd-pool/
      Data, RAID5: total=2.00GiB, used=1.00MiB
      System, RAID1: total=32.00MiB, used=16.00KiB
      Metadata, RAID1: total=1.00GiB, used=112.00KiB
      GlobalReserve, single: total=16.00MiB, used=0.00B