d4nk

Members · 11 posts

  1. I tried this and it did not work for me. I'm still getting the following in my logs for this particular work unit (all others worked fine): ERROR:There is no domain decomposition for 10 ranks that is compatible with the given box and a minimum cell size of 1.4227 nm. I'm on an AMD Ryzen 9 3900X (12C/24T). I edited the /config/config.xml file in the Docker container with vim while it was running (see the config sketch after this list) and tried restarting after editing the file, but I still get the same error.
  2. This is solved. The issue was that the NIC I wanted to pass through was assigned as eth0, but I couldn't change that because I had already stubbed it with pci-stub. The solution was to remove the pci-stub entry from the syslinux config, restart, change the NIC to eth2 in Network Settings under Interface Rules, restart, add the pci-stub entry back, and restart again; then it worked correctly. However, I then ran into the issue that IOMMU was not enabled, which is required to pass a PCI device through to a VM (see the checks sketched after this list). I enabled it on my AMD motherboard (not easy to find in the BIOS), rebooted, and was finally able to assign the PCI NIC to the VM in the GUI with no need for VM XML editing or virt-manager. After that, setting the NIC to promisc with ifconfig in the VM worked just like it would on physical hardware (see the sketch after this list). Success! Easy.
  3. This is all I have under Interface Rules in Network Settings when the Array is Offline. No option to switch what Unraid uses as eth0.
  4. There is no option to "switch NIC" in Network Settings. eth1 is configured with the IP, gateway, DNS, etc. eth0, the one I want to pass through, is not configured with an IP or anything, but it still shows at the top of Network Settings and there is no "Port Down" option next to it.
  5. @jonp What do you think? I'm following your guide.
  6. I want to pass a NIC through to my VM for use in promisc mode so it can do network monitoring; I have another NIC which Unraid is using for normal operations. I read here https://forums.unraid.net/topic/37959-guide-passing-through-network-controllers-to-unraid-6-virtual-machines/ that all I need to do is add pci-stub.ids=[id] to the startup config, but after doing that and rebooting, the NIC is still in use by Unraid and I can't do a port down on it because it's eth0.
     lspci shows:
       08:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
     lspci -n shows:
       08:00.0 0200: 8086:1539 (rev 03)
     Here's my syslinux config:
       kernel /bzimage
       append pci-stub.ids=8086:1539 initrd=/bzroot
     But after rebooting, that NIC still shows up as eth0 in Network Settings, and when I configure passthrough in the VM settings:
       <hostdev mode='subsystem' type='pci' managed='yes'>
         <source>
           <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
         </source>
       </hostdev>
     I get the error: "unsupported configuration: host doesn't support passthrough of host PCI devices". What am I doing wrong? I'm running Unraid 6.8.2. (See post 2 above and the IOMMU checks sketched after this list for how this was eventually resolved.)
  7. I had a failed 3TB drive, but I had 20TB of free space in the array, so rather than go out and buy another drive I didn't need, I shrank the array by moving the files from the failed (emulated) drive to the other drives in the array using unBALANCE. I then created a new config without the failed drive and let parity rebuild, following this procedure: https://wiki.unraid.net/Shrink_array It was during this rebuild that I had a power failure; the server rebooted and restarted the parity sync, but one of the drives didn't mount correctly, hence this problem. That did it! Everything is rebuilt, no data loss, and the array is healthy! When the array is offline, you can remove a device from the assigned devices and then mount it in Unassigned Devices to check its contents.
  8. Thanks! I followed that and installed xfsprogs 4.20, but I still ran into the hanging issue. I then found an article from 2009 that recommended using -P whenever xfs_repair hangs or becomes unresponsive while running, and that did the trick! After running xfs_repair -Pv /dev/md1, and then running again with just -v, xfs_repair reports that the drive is good (see the sketch after this list). However, the drive is disabled in the array in Unraid. How do I tell Unraid to re-enable it? BTW, Unraid ran through a parity rebuild while it was unable to mount the drive due to the XFS errors, meaning parity does not have this drive's information. If I mount the drive using Unassigned Devices, it mounts perfectly fine and I can access all of the files. So how do I tell Unraid that parity is invalid and the drive is good, re-enable the drive, and rebuild parity?
  9. xfs_repair -v /dev/md1 (this is for Disk 1); the array is in Maintenance Mode. So the upgrade to 6.7 RC5 did the trick as far as clearing the error, but now xfs_repair is freezing at Phase 7 (verify and correct link counts). It just sits at "resetting inode 33093704 nlinks from 6 to 4" for hours. I also tried it from the GUI; same thing, it freezes at the same spot.
  10. I have a healthy drive (according to SMART tests) but it is failing to mount. As far as I can tell, the next step is to run xfs_repair. However, when I run it against the drive it errors out. This is apparently a known bug in xfs_repair version 4.16.1 (the version included in Unraid 6.6.6): https://access.redhat.com/solutions/3483841 It is fixed in the latest version of xfsprogs, 4.20.0. ... So how can I update xfsprogs on Unraid to the latest stable release? Or is that even the right solution to this problem? Thanks!!!
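
For post 1, the edit being described is the CPU slot setting in FAHClient's config.xml. A minimal sketch of that workflow is below, assuming the container exposes the config at /config/config.xml as in the post; the container name "foldingathome" is only an illustrative assumption, and the right thread count is exactly what the error is complaining about, so the value has to be tuned per work unit.

    # Open the FAHClient config inside the running container (container name assumed):
    docker exec -it foldingathome vi /config/config.xml
    # The CPU slot's thread count is what determines how the work unit is
    # decomposed, typically something like <slot id='0' type='CPU'> with a
    # <cpus v='...'/> entry inside it.
    # Restart the container so FAHClient re-reads the config:
    docker restart foldingathome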
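
The sequence that finally worked in post 2, as a sketch. The device ID comes from the lspci -n output in post 6, the syslinux path is the usual Unraid location on the flash drive, and eth1 is only a placeholder for whatever name the passed-through NIC gets inside the VM.

    # /boot/syslinux/syslinux.cfg append line, re-added only after the NIC had
    # been remapped away from eth0 under Interface Rules:
    #   append pci-stub.ids=8086:1539 initrd=/bzroot

    # Inside the VM, promiscuous mode is then set the same way as on physical hardware:
    ifconfig eth1 promisc
    # or the iproute2 equivalent:
    ip link set eth1 promisc on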
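
The "host doesn't support passthrough of host PCI devices" error in post 6 turned out, per post 2, to be IOMMU being disabled in the BIOS. Two quick host-side checks, as a sketch:

    # On an AMD board the kernel log should show AMD-Vi/IOMMU initialization
    # once the feature is enabled in the BIOS:
    dmesg | grep -i -e iommu -e amd-vi
    # With IOMMU active this directory is populated; if it is empty, PCI
    # devices cannot be handed to a VM:
    ls /sys/kernel/iommu_groups/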
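
The xfs_repair sequence from posts 8 and 9, as a sketch. The array is started in Maintenance Mode first so the md device exists, and /dev/md1 corresponds to Disk 1 as noted in post 9.

    # -P disables prefetching of inode and directory blocks, which is what got
    # past the Phase 7 hang:
    xfs_repair -Pv /dev/md1
    # Second pass without -P to confirm the filesystem now checks out clean:
    xfs_repair -v /dev/md1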