MajorArchitect

Everything posted by MajorArchitect

  1. This may not help for this topic, but when that popped up for me I just put in the IP of the unRaid server and was able to access the shares that way, even though it wouldn't resolve the hostname. If the server has a static IP you can just use that. Ex: \\192.168.***.*** (a hostname-check sketch follows this list)
  2. May I suggest enabling Legacy Option ROM under Advanced Boot Options, and possibly also switching the boot mode to Legacy under Boot Sequence/Boot List Options? I have had better luck overall booting unRaid in Legacy mode with my config. I found a similar topic on the T20 that suggested it might (oddly) help your problem. Worth a shot. Last post in link: https://www.dell.com/community/PowerEdge-Hardware-General/PowerEdge-T20-no-boot-without-monitor/td-p/5146974
  3. @jonathanm Yes. This was the first one to pop up on Google for me, though. After posting here I saw the new thread and posted there as well. I wanted to make sure people can find a solution. I've had unRaid since 2016 but I'm new to posting on the forums, so I'm open to advice. Thanks!
  4. As he said, they can be changed from Settings/Disk Settings, but also individually by going to Main, selecting the specific disk you want to change notifications for, and then changing the values.
  5. You should also be able to change it per drive by going to Main, then clicking the drive and changing the values (attached screenshots: 'Click Disk', 'Change Value').
  6. One way of changing alerts for drive temps is under Settings/Disk Settings.
  7. Can you show us the IOMMU group info from System Devices under the Tools tab? (You should be able to attach a text file in your reply; a script that captures the same listing follows this list.)
  8. As far as drivers: during my setup a year ago I did notice that the newest Nvidia drivers resulted in Code 43 even with some common workarounds. I run 4 Nvidia cards on 4 VMs and each has been stable for over a year; I'm very happy and impressed with the stability.
     The config that worked best for me was booting unRaid in Legacy mode, using a dumped vbios for each VM, and using an "older" Nvidia driver. The newest I can confirm runs stable for me is 399.24, but 388.13 and 368.81 also worked.
     It's fairly easy to dump the vbios using TechPowerUp GPU-Z, but you may need to find an older version if it doesn't dump with the newest one. Also try booting a standard Windows machine with the card installed; I remember GPU-Z initially gave me errors when I tried to dump the vbios from inside the VM. You will need to use a hex editor to remove the header that GPU-Z adds to the vbios with this method (a sketch of that step follows this list).
     Hex editor (HxD): https://mh-nexus.de/en/hxd/
     Video for the hex modification (go to ~4 min)
     Nvidia driver 399.24 (shouldn't be necessary): https://www.nvidia.com/Download/driverResults.aspx/137727/en-us
     This is what I have added in the XML for my VM regarding Hyper-V (for me, lines 33-41):
       <hyperv>
         <relaxed state='on'/>
         <vapic state='on'/>
         <spinlocks state='on' retries='8191'/>
         <vendor_id state='on' value='none'/>
       </hyperv>
       <kvm>
         <hidden state='on'/>
       </kvm>
     ##Helpful side note: If you want to pass through a hard drive 'directly' instead of a vdisk... Source: https://www.youtube.com/watch?v=QaB9HhpbDAI
     UPDATE: I just found out that SpaceInvader uploaded a new video for this issue 2 months ago. If you specify the slot and multifunction mode in the XML like he did, along with the other things mentioned, I can confirm driver 442.50 works with my GTX 1060.
  9. In short, yes, I used way beyond 32GB in Windows. I didn't get around to trying a different Linux distro. HOWEVER, I did get it to work in unRaid by moving the 16GB modules from channels C and D to A and B. I did one boot with just the 16GB modules and saw 64GB, then put the rest back in and now have 96GB in unRaid. Kind of strange, because I had reseated everything twice the way it was and it didn't work. It looks like something didn't like the 16GB modules being in channels C and D; it may have to do with my motherboard and how it reported the modules to unRaid.
     For reference, the motherboard is a Gigabyte X99P-SLI, and it is worth noting that some Gigabyte boards (this one included) are known to have strange issues where Trident Z RGB memory does not show on half of the channels in the LED software and the software is unable to modify the SPD data on the RAM. Although I have Trident Z, the 16GB DIMMs are NOT RGB. So in hindsight maybe this belonged on another topic, but it may still help people who have a similar issue.
     (Edit) I just noticed the original post used a Gigabyte board as well. And although people concluded it looked like the memory modules weren't compatible based on the QVL, some people may have something else happening. Someone mentioned configuration before; in my case it didn't even want to POST if a 16GB DIMM was in the same channel as an 8GB one (understandable, just saying), and then there was the issue I had where moving memory channels fixed it. So making sure the modules match based on which channels they are in may matter, and even then, experimenting with which module goes in which channel may help.
  10. I have 96GB installed: 4x8GB and 4x16GB. My issue is similar in that unRaid shows only 32GB as installed and usable. In Windows 10 everything works PERFECTLY and all 96GB show, but 64GB of the RAM is missing in unRaid 6.6.6 or 6.7.2.
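
A minimal sketch for post 1, assuming Python 3 is available on a machine on the same network: it checks whether the unRaid server's hostname resolves, which is exactly the case where falling back to the raw IP in the UNC path helps. The hostname and IP below are placeholders, not values from the thread.

    import socket

    SERVER_NAME = "tower"         # hypothetical unRaid hostname; substitute your own
    FALLBACK_IP = "192.168.1.50"  # hypothetical static IP; substitute your server's

    try:
        ip = socket.gethostbyname(SERVER_NAME)
        print(f"'{SERVER_NAME}' resolves to {ip}; \\\\{SERVER_NAME}\\sharename should work")
    except socket.gaierror:
        print(f"'{SERVER_NAME}' does not resolve; use the IP directly, e.g. \\\\{FALLBACK_IP}\\sharename")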
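For post 7, a sketch assuming shell access to the unRaid host with Python 3 and lspci available: it prints roughly the same IOMMU grouping that Tools > System Devices displays, by walking /sys/kernel/iommu_groups.

    import glob, os, subprocess

    groups = sorted(glob.glob("/sys/kernel/iommu_groups/*"),
                    key=lambda p: int(os.path.basename(p)))
    for group in groups:
        print(f"IOMMU group {os.path.basename(group)}")
        for dev in sorted(glob.glob(os.path.join(group, "devices", "*"))):
            addr = os.path.basename(dev)  # PCI address, e.g. 0000:03:00.0
            # lspci -nns prints the device description plus vendor/device IDs
            desc = subprocess.run(["lspci", "-nns", addr],
                                  capture_output=True, text=True).stdout.strip()
            print(f"  {desc or addr}")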
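For the hex-editor step in post 8, the same idea sketched in Python, assuming (as the usual guidance goes) that the usable ROM begins at the first PCI option-ROM signature bytes 0x55 0xAA and that everything GPU-Z prepends before that point can be discarded. The file names are placeholders; verify the result in HxD as described in the post before pointing the VM at it.

    SRC = "gtx1060_gpuz_dump.rom"   # hypothetical GPU-Z dump
    DST = "gtx1060_headerless.rom"  # hypothetical cleaned copy for the VM

    with open(SRC, "rb") as f:
        raw = f.read()

    sig = raw.find(b"\x55\xaa")     # PCI option-ROM signature
    if sig < 0:
        print("No 0x55AA signature found; don't use this file as-is.")
    elif sig == 0:
        print("The ROM already starts at the signature; nothing to strip.")
    else:
        with open(DST, "wb") as f:
            f.write(raw[sig:])
        print(f"Stripped {sig} header bytes; wrote {len(raw) - sig} bytes to {DST}")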