Everything posted by KptnKMan

  1. @Ford Prefect Yeah, this all makes a lot of sense. Looks like there's even more reason to stretch for the bigger MT switch now, with the HW offloading for VLANs being something I'd like. Are the SR transceivers you're referring to 10Gb capable? I see the 1Gb transceivers going cheaper when I look around. Honestly, I'd rather use Ethernet if I can, as I have spare CAT6/7, so this would be fine for me.
  2. Thanks everyone for responding and sharing your experiences; it took some time, but this is what I was hoping to discover before investing in some Mellanox cards. In the coming weeks/months I'm hoping to finally find the time to invest in the hardware. @JorgeB I've looked into that CRS309-1G-8S+IN and it looks amazing. Currently my top pick. @Ford Prefect From what I've seen, the DAC seems to be cheaper. Do you mean a pair of transceivers and a patch cable as the comparison? @SimonF I've never seen that adapter before, does that convert the physical x1 to an x16 for SFF? That's amazing. The throughput of 6.09 Gbits/sec is a perfect compromise, I can do that. Did you have any issues with the SFF card fitting, or does it secure down well? You're using SFF cards for this to work? I've seen a few different models. @deveth0 thanks for the numbers, this is what I'm looking for. @JamesAdams it looks like that adapter could be useful, I'm gonna look into that. That's a quality suggestion.
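     For anyone wanting to compare their own numbers: throughput figures like the 6.09 Gbits/sec above are normally measured point-to-point with iperf3 (my assumption about how that number was produced; the IP below is just a placeholder):
         iperf3 -s                       # on one host, run the server
         iperf3 -c 192.168.1.10 -t 30    # on the other host, run a 30-second test against it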
  3. @Dexmorgan You haven't mentioned whether you created a compatible BIOS, but that's an important step. Did you hex-edit your BIOS like I did?
  4. I've been following the official thread, and I believe the short answer is no. I could be wrong.
  5. Just wondering/asking (I've been following these issues): Should this be in the release notes of a new version on GitHub for better visibility? I agree, this seems like a good way forward.
  6. Update: Finally upgraded my backup Unraid using my spare Ryzen parts, as planned. Purchased and installed an ASUS TUF GAMING X570-PRO (WI-FI) motherboard and upgraded the BIOS to the latest v3405. Looks like the temp monitors don't work for some reason on this board, even though it seems to use the same sensors and is almost identical to the ASUS TUF GAMING X570-PLUS (WI-FI) in my primary Unraid. It even detects the same drivers, but seems unable to read any of the sensors.
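     In case anyone hits the same thing, a quick way to check whether the sensor chip is visible at all is from the console with lm-sensors. A minimal sketch, assuming the board uses a Nuvoton Super I/O chip handled by the nct6775 driver (my assumption for this model, not something I've confirmed):
         modprobe nct6775    # load the Nuvoton driver
         sensors             # list whatever chips and readings are exposed
         sensors-detect      # if nothing shows up, probe for the chip interactively
     On some X570 boards the kernel won't touch the sensor's I/O region because ACPI has claimed it; adding acpi_enforce_resources=lax to the boot line is a commonly reported workaround, though I haven't verified it on this exact board.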
  7. The plugin is broken, and the maintainer has only communicated anything in the last week after a year of silence. My advice is to either avoid it completely or use the script version at your own risk.
  8. I'm interested to find out more about this. I've tried researching, and I've tried starting a thread to find out more, but is there a way to "reclaim" the passed-through GPU back to the Unraid UI? Is this script part of the method of doing so?
  9. I'm not sure if this has been mentioned here, and I don't want to assume I'm the first... but I'd really love to see better support for managing VM passthrough devices natively in Unraid, like GPU passthrough and PCIe device management. I don't mean some bespoke configuration/profiles for particular hardware; what I mean is that I would appreciate it if the VM config UI could be updated to set PCIe attributes like multifunction, slot, bus, etc. for GPUs, PCIe devices, or USB controllers (which are just PCIe devices, of course). We can set a BIOS file for a GPU, but most of the issues I see on this forum with GPU/USB passthrough are related to editing the XML to set attributes (a sketch of the kind of XML I mean is below). It would just be awesome if this was more transparent in the Unraid UI, maybe even with some logical, opinionated defaults. I love Unraid and I use it literally every day as my base OS for nearly everything I do, even as my daily workstation. Device passthrough is something that I find makes it hard to recommend to other people because of the configuration required. I'm ok with getting into it, but most people are not, and would like a UI element for most things. Unraid is great; I think this would make it that much greater.
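     For context, this is roughly the hostdev XML people end up hand-editing today for a GPU plus its audio function; the bus/slot addresses below are made up purely for illustration:
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <source>
             <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>   <!-- host address of the GPU video function (placeholder) -->
           </source>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>   <!-- guest address, multifunction enabled -->
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <source>
             <address domain='0x0000' bus='0x0a' slot='0x00' function='0x1'/>   <!-- host address of the GPU audio function (placeholder) -->
           </source>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>   <!-- same guest slot, function 0x1 -->
         </hostdev>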
  10. That's interesting, and thanks for the pointer. Funny, I was poking around and literally just stumbled upon the video and script you mentioned, then saw you mentioning it. Video here: https://www.youtube.com/watch?v=FWn6OCWl63o And the github for the dump script: https://github.com/SpaceinvaderOne/Dump_GPU_vBIOS I thought maybe to mess around with this and see, but my prying eyes did see in the video that SpaceInvaderOne has an "unlock nvidia" script: @SpaceInvaderOne maybe I'm making an assumption here, but is this script to unlock the card for use?
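      For reference, the mechanism a dump script like that relies on is the PCI ROM interface in sysfs; the manual version looks roughly like this (the PCI address is a placeholder, and the card generally needs to be idle/unbound for the read to succeed):
          echo 1 > /sys/bus/pci/devices/0000:0a:00.0/rom                                  # enable reading the expansion ROM
          cat /sys/bus/pci/devices/0000:0a:00.0/rom > /mnt/user/isos/vbios_dump.rom       # copy it out (example path)
          echo 0 > /sys/bus/pci/devices/0000:0a:00.0/rom                                  # disable ROM access again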
  11. Hi everyone, I've got a working single-GPU passthrough configuration, and I sometimes switch between VMs without issue by stopping the first VM and starting another with a valid config and BIOS. I've been trying to find out how I can return control of the GPU to the host Unraid system. I can't seem to find anything in the forum, but maybe I'm using the wrong search terms or something. Essentially, after stopping a passthrough VM, I'm trying to have Unraid regain control of the GPU output. Does anyone know how this is achieved?
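      For what it's worth, the approach I've seen suggested (and haven't confirmed actually restores console output) is to unbind the card from vfio-pci once the VM is down and hand it back to the host driver; the PCI address and driver name here are placeholders:
          echo "0000:0a:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind    # detach from vfio-pci
          echo "0000:0a:00.0" > /sys/bus/pci/drivers/nvidia/bind        # rebind to the host driver (amdgpu for AMD cards)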
  12. Hi @egtrev I can't tell what you're doing wrong exactly, but this sounds similar to the issues I experienced, and maybe your GPU BIOS is not working or is invalid somehow. When I see nvidia passthrough issues (I made AMD work as well, but nvidia seems a bit more stubborn), I usually try to advise the method I used. I upgraded my passthrough 1080Ti to a 3090 using the method by SpaceInvaderOne, but I made a guide here: I made my 3090 work using the exact same method, and I'm using it right now, rock solid, multiple restarts, everything working. I'd advise taking a look at the HOWTO I wrote above, try the instructions precisely, and let me know if it helped you. Edit: Also, I can't see what make and model of GTX960 you're working with. Could you mention the make and model?
  13. @LoneTraveler can you mention what exact 3090 model you're using? I can't find it anywhere in this thread.
  14. An update for this thread. I upgraded my system from the "EVGA 1080Ti FTW3 Gaming" to a "Gigabyte RTX3090 Turbo 24G" over this past weekend. Using the instructions in this thread, everything worked fine. I downloaded a new BIOS from techpowerup and hex-edited it, confirmed all the hostdev addresses, and everything worked. I hope this helps other people; post a reply if you found this useful.
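      For anyone wondering what the hex-edit actually does: the ROM file carries an extra header in front of the vBIOS image, and the edit removes everything before the ROM signature bytes (0x55 0xAA) sitting just ahead of the "VIDEO" marker. Below is a minimal Python sketch of the same trim; the marker-based search is my reading of the method, so double-check the result in a hex editor:
          # trim_vbios.py -- hypothetical helper reproducing the hex-edit step
          with open("original.rom", "rb") as f:
              data = f.read()
          video = data.find(b"VIDEO")                  # ASCII marker inside the header region
          start = data.rfind(b"\x55\xaa", 0, video)    # last ROM signature before that marker
          if video == -1 or start == -1:
              raise SystemExit("Markers not found; do the edit manually in a hex editor")
          with open("trimmed.rom", "wb") as f:
              f.write(data[start:])                    # keep everything from the signature onward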
  15. Hey @LoneTraveler I just upgraded my passthrough 1080Ti to a 3090 this past weekend. I used the method by SpaceInvaderOne but I made a guide here: I made my 3090 work using the exact same method, and I'm using it right now, rock solid, multiple restarts, everything working. I'd advise taking a look at the HOWTO I wrote above, try the instructions precisely, and let me know if it helped you.
  16. 50 hours? Yikes. I couldn't be happy with that, and this is also the reason I stuck with 8TB drives max until (unlikely, for many reasons) consumer SATA gets faster than 600MB/sec. My dual 8TB rebuild ran about 16 hours in the end, but I think it would have been much faster if the system wasn't in active normal use. Anyway, you haven't indicated any actual numbers on what you're doing, as I have, so there's no way to draw any real conclusion about what is happening with your system. My backup Unraid server is also an aging system on PCIe 2.0, which I'm planning to decommission soon, but I still get 95.8MB/sec on Parity-Checks: I'm not using 16TB disks in my backup system, of course, but it'll be a little faster than 95.8MB/sec when I swap my currently free 6TB disks in there. In comparison, I have no idea what you're dealing with that would take that long. You're using disks twice the size of my primary Unraid, but it sounds like more than double the time: You should post some meaningful numbers, like Parity-Check History and Diskspeed tests, if you want opinions.
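      As a rough back-of-envelope check (assuming the check has to read every sector of the largest drive): 8TB at an average of ~140MB/sec is about 8,000,000 / 140 ≈ 57,000 seconds, or roughly 16 hours, which matches my rebuild; 16TB at ~90MB/sec is about 178,000 seconds, or roughly 49 hours, so a 50-hour check suggests the average speed is sitting down around 90MB/sec rather than anywhere near the drives' rated throughput.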
  17. Hi @jbartlett and thanks for making this tool. I've gained a lot of insight into how my drives are performing. However, I'm still not sure about your question regarding the "Max allowed size"; I think you mean the threshold of measured throughput, but I don't know where I can see or verify this. It didn't seem to be increasing above 45MB. I also tried running the same test with "Disable Speed Gap detection" enabled, and it still would not finish the test. So once the parity rebuild finished, it looks like it was a little faster overall, despite the system being under normal load: Ran the Diskspeed test again (the test finished this time): The results look a lot more pleasing, with all the drives performing quite well together. Interestingly, there was still a bandwidth cap on Parity1 and (the now new) Disk1: I'm not sure what to do with this result as yet, but it looks like the system is at least behaving normally.
  18. I can't say the line looked very flat'ish, but I'm not sure if this is an issue with these older drives bursting data or something. I thought it would have had the opposite issue, not keeping up with the other 8TB drives. Guess I was mistaken. Is that the data in the other graph? That was somewhat linear like this graph, but it seemed to get stuck retrying at 90%. I'm not sure if you're referring to that. I feel like I've summoned a genie by accident.
  19. Rebuild running on Disk1 and Disk2 concurrently: This is with all VMs and Dockers running, although that should require few reads/writes on the array. I'm already pretty happy with the performance. When it's done, I'll see if I can run another Diskspeed test.
  20. Ok, so I downloaded and ran Diskspeed yesterday, and have been running it a few times with mixed results: Unfortunately, 2 of my disks are slower, but these are the 2x 6TB disks that I'm replacing... so no surprise there. What is surprising, however, is that one of the 6TB disks (Disk1/sdl) is bandwidth capped, and the other (Disk2/sdm) is causing issues finishing the tests, and seems to retry with Speed Gap errors every time I run it: Interestingly, I never knew about this Diskspeed tool, but it's interesting and has definitely confirmed my suspicions about these 6TB drives, being that they are (1) slowing down operations and (2) proving a bit inconsistent alongside the newer 8TB drives. If anyone hasn't done so, I'd recommend checking out the Hard Drive Database associated with the Diskspeed tool to see how drives with the same model number perform. I assume that the Diskspeed tool uploads test results to this database. So at this point, it looks like a good idea to get these guys out of there.
  21. So I'm wondering, since there has been no response from anyone at Limetech: is this something that would be viable for the future? With Linux projects like LookingGlass picking up traction and waiting for hardware vendors to catch up, I'm optimistic about it. I'm hoping to use SR-IOV one day in Unraid, or another platform for VM use. Is this even a possibility in Unraid?
  22. Sorry I'm not sure what you mean. The drives have had a couple TB free on each for some time, but the speed has been consistently at around 110MB/s. Recently they have been filling up a bit more, but the parity speed has been the same.
  23. Good to know, I'm also running dual parity.
  24. Thanks, that's great to know that things should work. As an aside, I've been wondering why my parity checks run at about 110MB/sec when my drives are capable of above 200MB/s. Am I missing something in the calculation, is it a case of the slowest drive, or something else?
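      Thinking it through as a back-of-envelope check (my own reasoning, happy to be corrected): a parity check reads all the drives in lockstep, so at any point it can only go as fast as the slowest drive at that position, and a drive that manages 200MB/s on its outer tracks typically drops to roughly half that near the inner tracks, so averaging ~110MB/s over a full pass wouldn't be surprising even before counting controller overhead.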