contay

Everything posted by contay

  1. Well, I guess this settles it quite well. Thanks @JorgeB Are there any gimmicks or tricks I should know about? Since this is not a RAID card, I should see the drives just as if they were connected to the motherboard, correct? This 9300 should work with Windows 10 as well, if I decide not to go Unraid, I guess.
  2. Hello mates, I have previously played around with Unraid in a 2-players-1-PC-style setup. I have been running bare metal for a while, since things went sideways with the previous player two. I have been considering a new Unraid build for a while, and now I managed to snag some SAS SSDs: 3.84TB for ~$50 each, eight drives in total. A couple of SAS controllers are recommended here. Does anyone happen to have personal experience with the LSI 9300-8i, for example? Does it work plug and play in Unraid?
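Once an HBA like the 9300-8i is flashed to IT mode it acts as a plain pass-through controller, so a quick console sanity check is enough to confirm the drives are visible. A sketch (assumes a Linux host with pciutils/util-linux installed; the greps may match nothing on other hardware, hence the fallbacks):

```shell
# Look for the SAS controller on the PCI bus; a 9300-8i in IT mode
# typically appears as a "Serial Attached SCSI controller" (mpt3sas driver).
lspci -nn 2>/dev/null | grep -iE 'sas|scsi' || echo "no SAS/SCSI controller found"

# With the HBA in IT mode, attached drives show up as ordinary
# /dev/sdX block devices, same as motherboard SATA ports.
lsblk -d -o NAME,SIZE,MODEL 2>/dev/null || echo "lsblk not available"
```

If the drives appear in lsblk the same way SATA disks do, Unraid should be able to use them without any extra configuration.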
  3. The Zenith II has only 8 phases as well; doublers don't equal phases. The Zenith Extreme Alpha, released in January, has 16 phases. Other Asus boards run 8. The Aorus Xtreme and Gigabyte Designare are solid, without the ROG tax, run 16 phases, and come with a PCIe 4.0 NVMe add-on card (4 NVMe drives). The Designare comes with a TB card too.
  4. I ordered the TRX40 Designare myself. The Xtreme excels in build quality, but I have no interest in all the bling it comes with. Should it arrive next week, I'll transfer my system from X399 to TRX40 ASAP.
  5. Running TR myself with two Radeon VIIs, it would indeed be nice if these bugs were fixed. The RVIIs pass through nicely to both VMs, but to restart a VM I have to restart the whole system, since the GPU goes to sleep (or something) when I shut the VM down.
  6. So, long story short: I have a FusionIO drive which requires a driver to be recognized. It works in a Windows environment with said driver, no problem. Now, is there a way to pass it to a VM even though it is not recognized by Unraid?
  7. I definitely will wait. This was a victory already.
  8. Legacy. I was fiddling around with files and used the EFI- folder name as it now seems to come. I tried forcing UEFI too; it didn't even boot. Sadly no, that still persists, but I can live with it now that I've got the system at least partially running, so I can set up a Steam library gaming rig for my GF while I run some Apex. A kernel patch should fix this, you say? Today I'll try to combat those audio bugs, and later I'll set up a couple of USB hubs for easier device access and switching.
  9. Okay, so. Just as I was giving up, I did it. The newest drivers (19.7.1) work, with both cards passed through to their own VMs. I played around again with different combinations, way too little sleep, and too many hours on this forum, and then I found the one below. Modifying it for my system, using the correct identifier for my first Radeon VII, I managed to pass it through. With OVMF and Q35 3.1 I successfully installed the current AMD drivers. Just display and audio, no Radeon software yet; I wanted to play it safe first. I have minor audio issues, but I guess I am past the worst problems. I hope. Thanks @bastl anyway for replying, and thanks @Siwat2545, wherever you are. On to the new misadventures!
  10. System specs: X399 Zenith Extreme Alpha with a 2950X, 64GB RAM, 2x Radeon VII, a 950 Pro 512GB as cache, and 2x 1TB 860 Evos as the array (parity + disk1). There are two issues I have encountered: 1) The primary GPU, when passed through to a VM, shows a black screen. If I launch the VM with VNC, it boots and reaches the Win10 installer; with the RVII the screen goes black. With SeaBIOS I get a few lines of text and then a black screen. 2) The other VM, with the second RVII, goes well and the Win10 install has no issues, but when I try to install the AMD driver package, everything crashes. Any ideas? I am new to the Linux environment and Unraid, so please be kind : ) Whatever I can provide to help, let me know. Thanks
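A black screen on the primary GPU is often the host console grabbing the card before the VM can. One commonly suggested workaround (a sketch, not a confirmed fix for this system) is to stub the card to vfio-pci at boot in Unraid's syslinux.cfg. The vendor:device IDs below are what a Radeon VII and its HDMI audio function typically report; confirm your own with `lspci -nn` before using them:

```text
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=1002:66af,1002:ab20 initrd=/bzroot
```

With both functions bound to vfio-pci, the host never initializes the card, which avoids the "GPU already in use" state that tends to produce a black screen on the primary slot.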
  11. If the RAM is found on this list, it is B-die and suitable for Ryzen/TR: https://benzhaomin.github.io/bdiefinder/
  12. I have 4x16GB G.Skills coming along. They are verified B-die, so there should be no issue running 'The Stilt' profile, which seems to be the best non-extreme OC and should be stable on all 2nd-gen Ryzen.
  13. This is very good to hear. I was going to abandon my plan of using a 2990X for a dual 16-core system, but I trust it's doable now. Just have to wait for a good, solid mobo to handle all the tasks I need.
  14. Okay, so. I am planning to invest in a Samsung 950 PRO M.2 SSD, which I must use with an adapter in a PCIe slot. Would it be possible to pass it through and boot that same installed Windows in a virtual machine, as I plan to keep the "normal boot" option too?
  15. That's a good idea; I had forgotten about the onboard USB headers! There is a very helpful guide for passing through onboard USB controllers: http://lime-technology.com/forum/index.php?topic=36768.0 Even I managed to pull this off. I have an X79 mobo (Rampage IV BE), which itself has five USB controllers, and I managed to get 4 hot-pluggable USB 3.0 ports for each VM. I am sure it would work on X99 boards, as they surely have at least the same number of USB controllers. Also, there is even one controller left over (which drives the onboard USB 2.0). You just need to be careful not to pass through the bus the Unraid stick is on : ) But sure, a PCIe bracket hooked into an onboard USB header would give lots of ports if needed.
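To see which PCI controller each USB bus hangs off (and thus which buses are safe to pass through without taking the Unraid stick with them), a small sketch in the spirit of the linked guide (assumes Linux sysfs; bus numbering differs per board):

```shell
# Map each USB root hub (usb1, usb2, ...) to its PCI controller address.
# The last DDDD:BB:SS.F segment in the resolved sysfs path is the controller.
for bus in /sys/bus/usb/devices/usb*; do
  [ -e "$bus" ] || continue   # skip if no USB buses are exposed
  addr=$(readlink -f "$bus" | grep -oE '[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]' | tail -n 1)
  printf '%s -> %s\n' "${bus##*/}" "$addr"
done
```

Buses that print the same PCI address share one controller, so they can only be passed through together; the bus holding the Unraid flash drive must stay on the host.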
  16. It just came to mind: should I go with stock, or just drop the multiplier from 45 to, let's say, 42 while keeping the voltage the same?
  17. The CPU passes IBT with maximum RAM at the current voltage and hasn't ever crashed. I haven't actually tweaked the memory at all, only enabled XMP. I might try the stock multiplier later.
  18. Now that you mention it, it is pushed up to 4.5GHz (from 3.5). I might try stock clocks. However, the voltages shouldn't matter then?
  19. Well, now that you mention it, crashing does still seem to happen. I haven't really had time to tinker with this, but I did manage to pass through USB controllers to both my VMs without additional problems. Didn't need an additional PCIe USB card after all : )
  20. I got a second 980 Ti (a second Gigabyte G1), so I can use native Windows from a different drive to run 980 Ti SLI. I have a 1440p/144Hz monitor (Acer XB270HU), so I really can use all that power in new games; The Division barely reaches 60fps with a single card at maxed settings and takes a nice 5GB chunk of VRAM. Anyway, PCIe 3.0 shows only a 1-2% difference between a 16x and an 8x slot, so I am good here even though I had to use a throwaway card like you said. I still think it is better to sacrifice the iGPU option in favor of more CPU cores here. A [email protected] packs quite a punch in single-core performance, and you can have six cores per VM, or in my case 6+4 with 2 left for the host. 8 RAM slots help too; currently I have 4x 8GB 2400MHz Dominators there, so 12GB per machine (and the rest for the host) is enough in most cases. LGA2011 boards sure have their advantages here, but LGA115X boards still offer enough for a decent 2-headed gaming rig.
  21. Fellow gamer, hello. I started a similar project a while ago. I'll surely follow your journey, even though I am a few steps ahead. I had a single Gigabyte G1 980 Ti (hereafter just G1), and I kind of just wanted an excuse for a second card. So, I remembered a Linus Tech Tips video about 2 rigs in a single tower, watched it a few times, and decided that was excuse enough. As I have an X79 mobo with a 4930K, I have to sacrifice the first PCIe slot for an old 7600GS, which is the GPU for the host. LGA2011 CPUs don't have an iGPU, so there's a little downside there; 6 cores/12 threads is nice, though. You might remember Linus using an old 9000-series card for the same purpose in their video. Currently I am having some crash problems with the VMs that I'm trying to sort out, but I'll see to them later. About USB controller passthrough: check this thread: http://lime-technology.com/forum/index.php?topic=36768.0 It helped me a lot, and I managed to do it the way I wanted. I've come far myself, having exactly ZERO experience with Linux until I started tinkering with this about a month ago.
  22. Originally, yeah. But since I "found the secret" I pass a USB controller through directly, as I have enough controllers to pass through to two VMs and keep one for the host, where the Unraid USB stick is. If you have enough available, I agree it's better to use the onboard controllers if you manage to pass them through. If I am correct, there were five onboard controllers: two different ones for USB 3 on the I/O, one for USB 2 on the I/O, one for the front USB 3 connector, and one for the two USB 2 mobo connectors, as it showed a total of six when the PCIe USB card was mounted. Anyway, thanks for the tips and the guide. About the VM crashing, I might just remove it and make a new one; there isn't really anything on it yet. Or just remove the VM and reuse the disks when creating the new one. There were a few posts about similar cases too. I'll report here if I encounter more problems passing through the rest of the controllers I intended : )
  23. Originally, yeah. But since I "found the secret" I pass a USB controller through directly, as I have enough controllers to pass through to two VMs and keep one for the host, where the Unraid USB stick is.
  24. Section 6 on the first page. I know I want to pass through all USB slots on buses 7 and 9:

      readlink /sys/bus/usb/devices/usb7
      readlink /sys/bus/usb/devices/usb9

      Now, there are multiple sequences in the 0000:00:00.0 format. From the first page I made out that only the last part, in the latter case 0000:0f:00.0, counts here. So, if I wish to pass through all USB ports on bus 9, I just copy this into my XML:

      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x0f' slot='0x00' function='0x0'/>
        </source>
      </hostdev>

      and it should do the trick? E: It worked; I did it with a controller not mentioned here, as these were meant for "VM1", but VM2 did work, even after removing and replugging a USB device. Great! I'll try the same tomorrow with the ports mentioned here for VM1. Unraid just crashed upon exiting the VM... Well, one step at a time. Thanks.
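The "only the last part counts" rule from the readlink output can be automated. A sketch using a sample sysfs path (the path below is an assumed example; run readlink on your own system for the real one):

```shell
# Sample of what readlink /sys/bus/usb/devices/usb9 might resolve to
# (hypothetical path; the segments before the controller vary per board).
path='../../../devices/pci0000:00/0000:00:03.0/0000:0f:00.0/usb9'

# The controller to pass through is the LAST DDDD:BB:SS.F segment, which
# maps directly onto domain/bus/slot/function in the <address> element.
ctrl=$(printf '%s\n' "$path" | grep -oE '[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]' | tail -n 1)
echo "$ctrl"   # prints 0000:0f:00.0 -> domain 0x0000, bus 0x0f, slot 0x00, function 0x0
```

Splitting that address on ':' and '.' gives the four hex values for the hostdev `<address>` element verbatim.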