FCruz2489

Members
  • Posts: 29
  • Joined

  • Last visited

About FCruz2489

  • Birthday October 24

Converted

  • Gender
    Male
  • Location
    New York

FCruz2489's Achievements

Noob (1/14)

Reputation: 3

  1. Hi! Just wanted to provide an update - since I added the line you suggested to the config AND changed all my PCI-e and NVMe slots to disable ASPM in the BIOS, the drives have been solid for a week. I'll continue to monitor it, but this is very promising and I hope it did the trick. Thank you again!
  2. Sadly, one of the drives disappeared with this setting enabled while I was running the cache pool BTRFS scrub/repair. But they are still connected via the Highpoint card, so maybe I need to put them back into the M.2 slots with this new setting.
  3. Thank you so much for the response! I added that to my flash config and am rebooting the server now. My docker and VM images are jacked again, so I will reconfigure them with this setting enabled and see if all drives can finally be reliable. If I wanted to add this to the GUI boot option as well, would it be the text below, or do I need a comma?

     append initrd=/bzroot,/bzroot-gui nvme_core.default_ps_max_latency_us=0 pcie_aspm=off
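     For reference, a minimal sketch of how those parameters would sit in /boot/syslinux/syslinux.cfg, assuming the stock Unraid boot entries (labels and the rest of the file may differ on a given flash drive). The comma only joins the two initrd images; the extra kernel parameters are separated by spaces:

     # /boot/syslinux/syslinux.cfg (relevant entries only)
     label Unraid OS
       menu default
       kernel /bzimage
       append initrd=/bzroot nvme_core.default_ps_max_latency_us=0 pcie_aspm=off

     label Unraid OS GUI Mode
       kernel /bzimage
       append initrd=/bzroot,/bzroot-gui nvme_core.default_ps_max_latency_us=0 pcie_aspm=off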
  4. Hi All, sorry for the long post, but I have been experiencing some very strange SSD issues since I upgraded my platform from an i9-10900 to an i7-12700, and it's been driving me mad. I had a stable Z490/DDR4/i9-10900 server which I migrated to B660/DDR5/i7-12700 and then to W680/DDR5/i7-12700. I know SSDs are technically not supported in the array, but I am not using parity, so I think it's OK for my purposes (also understanding that my data is unprotected).

     ARRAY: two Intel P4510 8TB U.2 NVMe drives - NO PARITY
     CACHE: two WD SN850X 1TB M.2 NVMe drives - BTRFS RAID 1

     ARRAY DRIVE ISSUES: Upon moving to the new platform (ASUS ROG STRIX B660-I) I started experiencing issues where the NVMe drives would randomly drop out. The CACHE drives were connected directly to the motherboard M.2 slots and the ARRAY drives were connected via a Highpoint SSD7104 PCI-e 3.0 x16 RAID card with U.2 to M.2 adapters because of the ITX motherboard's limitations. The CACHE drives were solid/stable, but the ARRAY drives would drop off randomly - only one at a time, never both. The server would also lock up and need a hard restart every few days. The motherboard was experiencing some coil whine as well, so the entire situation drove me to replace the motherboard.

     CACHE DRIVE ISSUES: Now I have everything in a SUPERMICRO MBD-X13SAE-F-O W680 setup. The CACHE drives were connected straight to the motherboard via the M.2 slots and the ARRAY drives are now connected via U.2 to PCI-e 3.0 x4 adapters - I got rid of the Highpoint PCI-e card since I have additional PCI-e slots now. Memory in this system is non-ECC Team T-Force Vulcan 32GB (2 x 16GB) DDR5 FLBD532G5600HC36BDC01. Everything seemed to be running fine and stable until, after a few days, the CACHE drives started disappearing and I was getting NVMe BTRFS errors in my system log. Only one drive would disappear, not both. I tried setting the PCI-e link speed on the M.2 drives to 4.0 manually instead of AUTO - even though the slots on the motherboard and the drives are both 4.0. It worked fine for a bit and then one drive dropped out. I did some digging on the forum and people mentioned this could be a memory-related issue, so I set the memory to default settings. A single CACHE drive dropped and the error logs came back. At this point I figured it was something with the M.2 slots on the motherboard, so I reintroduced the Highpoint SSD7104 into the system and put the CACHE drives on there. Again, everything worked fine for a bit and then one CACHE drive dropped off. I tried setting the PCI-e 5.0 slot on the motherboard to 3.0 manually to match the Highpoint SSD7104 card. It worked fine for a bit and then a single drive was gone again.

     At this point I assume the Highpoint SSD7104 card might be the culprit, since originally the ARRAY drives were dropping off while connected to it (via the U.2 to M.2 adapters) and now the CACHE drives are dropping off while connected to it. However, this doesn't explain why the CACHE drives were dropping off while connected directly to the M.2 slots on the SUPERMICRO MBD-X13SAE-F-O. As it stands, I don't know if I need to replace the Team Group DDR5 memory with ECC or different memory, or if the WD SN850X drives are just not good as CACHE drives on this motherboard. I have had to reconfigure all my dockers and my docker image numerous times in the past few weeks. All of these drives are super expensive and have worked reliably in the past on the older DDR4 platform. I am hoping someone can take a look at my diagnostics and give me an idea of where to start on fixing this.

     I have done some research on ASPM to see if that could be causing issues with the NVMe drives, but I don't know what settings to change, if any. Thank you in advance for reading and any input at all! fc-unraid-diagnostics-20230215-1022.zip
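     For anyone hitting the same symptoms, a few commands can show the current ASPM and NVMe power-saving state before anything is changed (a sketch, assuming nvme-cli is available and /dev/nvme0 is one of the affected controllers):

     # Active ASPM policy the kernel is using (default/performance/powersave/powersupersave)
     cat /sys/module/pcie_aspm/parameters/policy

     # ASPM support advertised and the state currently enabled on each PCIe link
     lspci -vv | grep -i aspm

     # APST (Autonomous Power State Transition) feature on the drive, human-readable
     nvme get-feature /dev/nvme0 -f 0x0c -H

     # Confirm any boot-time overrides actually reached the kernel
     cat /proc/cmdline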
  5. I purchased it from Newegg in the past month and it came with 2.0 installed. I didn’t install a 13th gen CPU, but I assume it would have worked. Also, Supermicro has some compatible ECC memory on their US online store; 32GB sticks were listed at about $160 last I checked.
  6. Awesome, thanks so much for your help! After I posted here I found your other posts on a different thread, so I had confidence it would work. I got my 10700 yesterday and everything is up and running with no issues! Got the hardware transcoding working and it handles it without breaking a sweat. Now I can sell my RTX 4000 (which was way overkill) and it will pay for the new CPU, motherboard and case I bought, haha. Really hoping the 6.9 RC is out soon; I don't mind running the beta, but I'm excited for all of the other goodies it should include. Also, glad the latest AMD CPUs are so great they forced Intel to put hyper-threading back into CPUs lower than the i9s. I was tempted to get a 10900, but it's just not worth the money for my purposes.
  7. Hi @lukeoslavia, thanks for posting your findings for everyone else to see! I'm getting my 10th gen i7 tomorrow and just wanted to check - did you load the same driver for the 10th gen as for previous models? Was it the "modprobe i915" line in the go file, or was it something else? Thank you again.
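     For context, the go-file approach being asked about usually looks something like the snippet below on Unraid 6.8 and earlier (a sketch of the commonly posted recipe, not quoted from the other thread; the permissions line in particular varies between guides):

     #!/bin/bash
     # /boot/config/go

     # Load the Intel iGPU driver so /dev/dri exists for hardware transcoding
     modprobe i915
     # Let containers (e.g. Plex) open the render node
     chmod -R 777 /dev/dri

     # Start the Unraid web GUI (stock last line of the go file)
     /usr/local/sbin/emhttp &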
  8. Of course! This is for the GIGABYTE GeForce RTX 2080 GAMING OC 8G GV-N2080GAMING OC-8GC. I did not have to modify anything with a hex editor; the only change I made was to rename the extension from .bin to .rom (not even sure if I needed to do that or not). GigabyteGamingOC_2080_BIOS.rom
  9. Hey guys, so I was able to boot directly into Windows on my NVMe drive with the RTX 2080 (bypassing the boot to unRAID from the flash drive). Once in there I dumped the card BIOS and copied it over to a flash drive - I also installed the NVIDIA drivers while the card was easily recognized. Once back in unRAID I added the card and audio device to my VM, and also had to re-add the NVMe drive (via the XML) as it got removed for some reason. I wasn't able to boot at first and was getting some IOMMU group errors. Thinking back to @rinseaid's suggestion, I set the PCIe ACS override to 'both' and rebooted the server. Once back in, I was able to boot successfully with GPU passthrough and my NVMe drive, although there was definitely an uncomfortably long black screen until the Windows login screen finally popped up.

     At this point I am all set; I just need to figure out the best way to handle USB devices, and for some reason my Logitech receiver's range is absolutely terrible. I have my server connected to my TV and it doesn't even come close to reaching the couch for keyboard and mouse usage (only about 7 feet away); I have to sit 1-2 feet away from the server to get reliable mouse and keyboard performance. I will get a USB extension cable and put the receiver closer to the front of the TV to see if that helps. If not, I am also thinking about the possibility of passing through the built-in Bluetooth card on my mobo and using BT for mouse and KB connectivity - but I will have to do some research to see if that is possible.

     Just wanted to provide a conclusion to this adventure in case someone runs into the same issues, and thank you all once again for the help, suggestions and responsiveness. This is an awesome community!
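     For anyone curious, the 'PCIe ACS override' setting mentioned above just adds a kernel parameter to the flash device's boot line; with 'Both' selected the append line generally ends up along these lines (a sketch - the rest of the line depends on the existing boot config):

     # /boot/syslinux/syslinux.cfg, default entry with ACS override set to 'Both'
     append initrd=/bzroot pcie_acs_override=downstream,multifunction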
  10. Just got home, and in the 15 minutes before I had to leave for a class, I cracked open the case, installed the new card and booted it up - NO GPU in unRAID... Utterly confused, I got back into the BIOS and loaded the defaults in a last-ditch effort to resolve a potential problem setting - something I hadn’t tried yet. Booted back into unRAID and voila, the card appeared! I feel so dumb, BUT I never changed any advanced settings in there, so I’m not sure what option caused it. Thanks again to all for the help! Now on to the next step after “plugging it in” at some point tonight when I get home.
  11. That’s pretty sweet! I just got a replacement Gigabyte RTX 2080 today - the Founders Edition is on its way back to NVIDIA. Will try to get it going again at some point tonight if I have time. Will start with getting the card recognized first(!) and then move on to GPU and USB passthrough.
  12. 100% confident? No - but I'm 99% sure a 600W Corsair 80+ Gold rated PSU is more than enough for this card and my configuration, especially since I'm just booting it up. The card will only consume its peak 225W at full load, which should only happen when gaming or rendering. I am not even able to get the OS to recognize it, so as you had previously mentioned, it's most likely some other hardware issue. Besides the card, I have a 65W CPU at stock clocks and voltage, the mobo, 2 fans, 2 hard drives and 2 SSDs, so it's not like I'm stressing the PSU at all.
  13. Sweet! Hope it works for you on the first attempt like @rinseaid experienced. I will definitely be interested in hearing about it, thanks!
  14. So to update this, I reseated the card and the PSU cables on both ends 3 separate times. I saw no change at all, so I am just going to assume the card arrived DOA, since I don't have another system to test the card in, or another card to test in my unRAID build. I have initiated an RMA with NVIDIA, so when I get a replacement card, if I still have this issue then at least I will know it is not the card. Thanks again to everyone for the help and suggestions. I'll be sure to check back once there is a change - hope everyone is having a great weekend!
  15. Thanks for the suggestions, guys. I have a Corsair SF600 600W 80+ Gold certified PSU with both the 8-pin and 6-pin connected to the card - but I’m going to pull the card, check all the cables to make sure they are inserted properly (they’re modular) and give this another go. My system is only supposed to consume around 450W at peak usage without overclocking, so the PSU should be more than enough, especially for just booting. I wasn’t able to get a second card to test with, but once I recheck everything, grabbing some kind of second card to test is my next step. I’ll update with status or progress - appreciate all the helpful suggestions!