mustava

Members • Posts: 20

Everything posted by mustava

  1. Is there any way I can force the plugin to use the old 460 drivers?
  2. After some further research, it seems to be a Linux/Unix/VMware/Nvidia driver issue. The GPU works fine passed through to Windows. As per the previous users' issues, driver version 460 was apparently the last confirmed working, and the newer beta versions reportedly work if the open-source kernel modules are used.
  3. Hi All, I have just installed a 1660 Super into my Unraid (6.11.4) server and am following the process outlined in this thread to get it up and running, but I am experiencing a few issues. Although my GPU is shown in my system devices, once I have installed the Nvidia-Driver plugin (driver 525.53), anything that interacts with the GPU hangs the whole server and I need to reboot. This happens with the array both started and stopped, and includes:
     - generating diagnostics (when the Nvidia driver is installed)
     - launching the plugin settings page
     The device is shown in System Devices as:
     [10de:21c4] 1c:00.0 VGA compatible controller: NVIDIA Corporation TU116 [GeForce GTX 1660 SUPER] (rev a1)
     A few notes: I am running under VMware, but pass-through works seamlessly on 3-4 other devices, including PCI devices and the iGPU. I have attached the diagnostics I can access, but I cannot generate them with the plugin installed, as it gets stuck on nvidia-smi.txt (a timeout-guarded nvidia-smi check is sketched after this list). Appreciate any advice. tower-diagnostics-20221119-2354.zip
  4. Attached tower-diagnostics-20210605-1113.zip
  5. If I assume the disk is fine, is there a way I can add it back to the array without rebuilding?
  6. Hi All, I recently swapped my CPU over to a higher model. Upon rebooting Unraid, one of my disks became disabled. On initial inspection the disk appears healthy. I am unsure of the process to re-enable this drive: do I have to rebuild it, or is there any way to re-enable it without a rebuild? Appreciate any advice! tower-smart-20210604-0117.zip
  7. After some further tests, I found that anything < 1400 works, although everything else in my network sits at 1500. MTU is set to the default of 1500 everywhere, including the interfaces and the DHCP server.
  8. Hi All, I have encountered a very strange issue that I'm struggling with. For some reason I can't pull any images from Docker unless my eth/bond MTU is set to 900 (down from 1500). My network and DNS all work as expected, also tested from the Unraid CLI. Will running a 900 MTU cause me any performance issues? I am surprised that it has any effect on pulling images from Docker at all! Any help or advice would be appreciated. (A rough path-MTU probe is sketched after this list.)
  9. I am now getting this error too. I have checked my DNS and everything else is resolving fine.
  10. Wow, OK - I just found the issue by chance two minutes after writing this, despite trying to fix it for hours... Due to the new ESX installation, the vSwitch for my LAN network had the 'MAC address changes' security setting set to 'Reject':
      "Reject. If the guest OS changes the effective MAC address of the virtual machine to a value that is different from the MAC address of the VM network adapter (set in the .vmx configuration file), the switch drops all inbound frames to the adapter. If the guest OS changes the effective MAC address of the virtual machine back to the MAC address of the VM network adapter, the virtual machine receives frames again."
      I guess that because the OS had existing MAC values and the new install/new NICs were assigned different ones, frames were being blocked by the switch. The more interesting thing is why it worked at all, albeit slowly, rather than not at all... (A hedged esxcli check of this setting is sketched after this list.)
  11. Hi All, I successfully upgraded from ESX 6.7 U2 to ESX 7.0 last night. I had a bit more of a bumpy experience. Thus far:
      - Had an issue updating the existing USB-based ESX install: I installed as per the normal upgrade process, successfully booted from USB into ESX 7.0 and started up all my VMs, including Unraid, 100% perfect.
      - Needed to reboot the server to toggle pass-through on a device... ESX never came back up (VMware recovery error), forcing me to start from scratch.
      - Re-installed ESX 7.0 and re-registered the VMs. Unraid boots perfectly and everything comes back up, including my RDMs for onboard-attached SATA drives. (I have a mix of RDMs and passed-through drives, which have been rock solid on 6.7.)
      From this point, I have started noticing super slow performance, first in Plex, and now, upon further inspection, even just browsing my shares: it takes a full minute to open a share or sub-folder. This is true even with nothing else running on the server. Anyone have any ideas? I can't see any suspect errors in the logs. Good luck to anyone taking the plunge!
  12. That sounds exactly the same as my issue. The CPU gets stuck at 100% and I can't reboot the server. I will try unplugging the drive next time the issue occurs.
  13. Yes, still the same issue. Even if I stop everything using the disk/array, I still can't shut down the server cleanly. Is there a way to restart the Unassigned Devices plugin?
  14. Hi All, I am encountering an issue where, each morning, I log into Unraid to find Unassigned Devices unresponsive. Everything in the GUI and the array is responsive, except the one disk mounted via Unassigned Devices and the GUI for Unassigned Devices (just the loading animation). The disk is used by a VM to write to overnight as a bit of a scratch disk. I am also unable to shut down or reboot the server in this state; I am required to do a hard power-off in order to restart everything. I am worried doing this is going to cause problems pretty quickly. Has anyone had this issue before? (A sketch for checking what is holding the mount open follows this list.)
  15. Hi All, As the title says, I have encountered an issue where, each morning, I log into Unraid to find Unassigned Devices unresponsive. Everything in the GUI and the array is responsive, except the one disk mounted via Unassigned Devices and the GUI for Unassigned Devices (just the loading animation). The disk is used by a VM to write to overnight as a bit of a scratch disk. I am also unable to shut down or reboot the server in this state; I am required to do a hard power-off in order to restart everything. I am worried doing this is going to cause problems pretty quickly. Has anyone had this issue before?
  16. I moved the drive that I was rebuilding to the onboard controller (which required an RDM, as sketched after this list) and the rebuild completed successfully! What a crap controller... it seems impossible to get a firmware update for it.
  17. I can try moving the disk that is dropping onto another controller... I have heard that if you update the firmware on the controller then this problem can be mitigated. Anyone know how I could update the drivers or firmware? I can't seem to see any software updates for the controller listed on the website: http://www.iocrest.com/en/product_details277.html The other recommended controllers seem very expensive for my uses... Could there be any other reason why I would get errors?
  18. Hi All, I recently had my first disk fail after many years of seamless operation. I quickly rushed out, bought a new disk and installed it, replacing the failed disk (I will do some more in-depth testing once my array is back up). Thankfully I had valid parity. Overall I'm not too worried about losing any pending writes, as most of the data stored is media I can replace (along with some personal data).
      Once the new disk was installed, I proceeded with the rebuild process; however, at some point it looks like it failed. Unraid then showed massive error counts across a few disks, including parity (e.g. 3000000000), which looks to me like a false value. My log drive also fills up and crashes the GUI by the time I wake up to see the rebuild has failed. The disk also seems to have the correct files on it from what I can tell, but Unraid always begins a new rebuild once I add it into the config. I tried rebuilding a few times but always get the same result.
      I am unsure that I have captured the right logs and have since rebooted the server. I am too scared to try another rebuild as some of my disks are old. I have backed up the data on the emulated disk... but I am not sure what the best approach is to get my array back to a healthy state. Does anyone have any suggestions? (New Config perhaps?) Appreciate any feedback.
      (with new disk) tower-diagnostics-20190730-0540.zip
      (with original disk) tower-diagnostics-20190722-0247.zip
  19. Hi all, I am running Unraid 6.3.3 under ESX 6.5 and have run out of onboard SATA ports to add new drives to my Unraid server. I have done a whole ton of searching on what the best PCIe SATA controller is, but haven't found much clarity about what is best - especially for virtualized deployments. Does anyone know some good 4-port options, or at least one that will work without issue? I do not care too much about speed. Before anyone asks, I use ESX for my homelab for work - this is why I have opted not to use the virtualization support in Unraid. Everything has been working seamlessly. Thanks for any advice!
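
For the GPU hang in item 3, one way to poke the driver from the Unraid console without letting a wedged nvidia-smi lock up the shell is to run it with a hard timeout. This is only a minimal sketch: it assumes the nvidia-smi binary installed by the Nvidia-Driver plugin is on the PATH, the 10-second limit is arbitrary, and if the process is stuck in uninterruptible I/O even this may not return cleanly.

```python
#!/usr/bin/env python3
"""Minimal sketch: query the GPU name and driver version via nvidia-smi,
giving up after a timeout instead of hanging the console indefinitely."""
import subprocess

try:
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,driver_version", "--format=csv,noheader"],
        capture_output=True, text=True, timeout=10,  # arbitrary 10-second limit
    )
    print(result.stdout.strip() or result.stderr.strip())
except subprocess.TimeoutExpired:
    print("nvidia-smi did not respond within 10 s -- the driver/GPU is likely wedged")
except FileNotFoundError:
    print("nvidia-smi not found -- is the Nvidia-Driver plugin installed?")
```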
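
For the MTU problem in items 7 and 8, the largest packet that actually gets through unfragmented can be found by binary-searching ping payload sizes with the Don't-Fragment bit set. A rough sketch, assuming Linux iputils ping (as on the Unraid console) and that at least a 500-byte payload passes; the target 8.8.8.8 is just a placeholder - use your gateway or the registry host instead. The path MTU is the passing payload plus 28 bytes of IP and ICMP headers.

```python
#!/usr/bin/env python3
"""Rough path-MTU probe: binary-search the largest ICMP payload that passes
with the Don't-Fragment bit set (Linux iputils ping assumed)."""
import subprocess

TARGET = "8.8.8.8"   # placeholder test host -- use your gateway or registry
OVERHEAD = 28        # 20-byte IP header + 8-byte ICMP header

def passes(payload: int) -> bool:
    """True if a single DF-flagged ping of this payload size gets a reply."""
    result = subprocess.run(
        ["ping", "-M", "do", "-c", "1", "-W", "1", "-s", str(payload), TARGET],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

lo, hi = 500, 1500   # assumes 500 bytes always passes; 1500 is the usual ceiling
while lo < hi:
    mid = (lo + hi + 1) // 2
    if passes(mid):
        lo = mid
    else:
        hi = mid - 1

print(f"Largest unfragmented payload: {lo} bytes -> path MTU roughly {lo + OVERHEAD}")
```

If this reports well under 1472, something on the path (a VPN, PPPoE link, or a misbehaving switch/vSwitch) is eating the difference, which would line up with Docker pulls only working at a reduced MTU.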
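
Related to the vSwitch 'MAC address changes' setting in item 10: the same security policy can also be inspected from the ESXi shell with esxcli, wrapped in Python here only to keep these examples in one language. Treat the exact namespace and option names as assumptions recalled from memory - verify them against esxcli's built-in help before relying on this - and the vSwitch name vSwitch0 is a placeholder.

```python
#!/usr/bin/env python3
"""Sketch: show (and optionally relax) the MAC-address-change policy of a
standard vSwitch via esxcli.  Command syntax is recalled from memory and
should be verified against the esxcli help output before use."""
import subprocess

VSWITCH = "vSwitch0"  # placeholder -- the vSwitch the Unraid VM's port group uses

def esxcli(*args: str) -> str:
    """Run an esxcli command on the ESXi host and return its output."""
    return subprocess.run(["esxcli", *args], capture_output=True, text=True).stdout

# Show the current security policy; look for the 'Allow MAC Address Change' field.
print(esxcli("network", "vswitch", "standard", "policy", "security", "get",
             f"--vswitch-name={VSWITCH}"))

# Uncomment to allow MAC changes (equivalent to setting 'Accept' in the UI):
# print(esxcli("network", "vswitch", "standard", "policy", "security", "set",
#              f"--vswitch-name={VSWITCH}", "--allow-mac-change=true"))
```

Note that a port group can override the vSwitch-level policy, so the setting may need to be checked in both places.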
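
For the stuck Unassigned Devices disk in items 13-15, it can help to see which processes still have the mount open before attempting an unmount or array stop. A small sketch using fuser, assuming it is available on the Unraid console; the mount point /mnt/disks/scratch is a hypothetical name - substitute the actual UD mount path.

```python
#!/usr/bin/env python3
"""Sketch: list the processes holding an Unassigned Devices mount open.
The mount point is hypothetical -- substitute the real one."""
import subprocess

MOUNT = "/mnt/disks/scratch"  # hypothetical UD mount point

# `fuser -vm <path>` lists every process with files open on that filesystem.
result = subprocess.run(["fuser", "-vm", MOUNT], capture_output=True, text=True)

# fuser prints its verbose table on stderr; nothing back means nothing holds it.
print(result.stderr or result.stdout or f"No processes appear to be holding {MOUNT}")
```

If the holder turns out to be the VM that uses the disk as a scratch target, shutting that VM down (or detaching the disk from it) before unmounting may avoid the hard power-off.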
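
On the RDM mentioned in item 16: under ESXi, a raw device mapping for a locally attached disk is normally created with vmkfstools -z (physical compatibility mode), pointing at the raw device and a .vmdk stub on a datastore, which is then attached to the Unraid VM. The sketch below only prints the command rather than running it, and every path in it (device identifier, datastore, folder) is a placeholder, not taken from the posts above.

```python
#!/usr/bin/env python3
"""Sketch: assemble the vmkfstools command for a physical-mode RDM stub.
All paths are placeholders; the command is printed, not executed."""
import shlex

# Placeholder device and datastore paths -- substitute the real ones.
raw_device = "/vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_SERIAL"
rdm_stub = "/vmfs/volumes/datastore1/unraid/disk-rdm.vmdk"

# -z creates a physical-compatibility (pass-through) raw device mapping.
command = ["vmkfstools", "-z", raw_device, rdm_stub]
print(" ".join(shlex.quote(part) for part in command))
```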