contay

Members
  • Posts: 43
  • Joined
  • Last visited
Everything posted by contay

  1. <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x00' slot='0x00' function='0x0'/> </source> </hostdev> Okay so, here is the problem. How do I figure out a) the address domain, b) the slot and c) the function? At least I figured out which USB port goes with which bus and where unRAID is, so I got that going for me, which is nice.
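The domain, bus, slot and function come straight from the device's PCI address as printed by `lspci -D` on the host. A minimal sketch of the mapping; the `0000:03:00.0` sample address below is illustrative only, not taken from any actual hardware in this thread:

```shell
# lspci -D prints PCI addresses as domain:bus:slot.function,
# e.g. "0000:03:00.0 USB controller: ...". Split a sample address
# into the four <address> attributes, prefixing each with 0x.
addr="0000:03:00.0"
domain="0x${addr%%:*}"     # -> 0x0000
rest="${addr#*:}"          # -> 03:00.0
bus="0x${rest%%:*}"        # -> 0x03
slotfun="${rest#*:}"       # -> 00.0
slot="0x${slotfun%%.*}"    # -> 0x00
func="0x${slotfun#*.}"     # -> 0x0
echo "domain='$domain' bus='$bus' slot='$slot' function='$func'"
```

On a real system you would take the address from the `lspci -D` line for your device and fill the `<address>` element the same way.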
  2. Okay, I'll try it out then. A separate PCIe USB card should go with the same method as the mobo controllers?
  3. Hello. As a new user with no previous Linux experience, this problem is a little hard. Would it be possible to do this in the (web) GUI's XML editor? I have a Rampage IV BE as my mobo and, in addition, a separate USB3 PCIe card. I'd like to dedicate the PCIe card (2 ports + 2 ports in front) to one VM and a few ports to the second VM. Would it be possible to find the number of USB controllers in the GUI and then proceed to edit the XML for each VM?
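For finding the USB controllers themselves, the GUI is not strictly needed: `lspci` on the host lists them together with their addresses. A sketch filtering a sample of that output (the two controller lines below are assumptions for illustration, not real output from this poster's board):

```shell
# On the unRAID host you would run:  lspci -D | grep -i "USB controller"
# Here a sample of that output is filtered instead, so the parsing
# step is visible and reproducible.
sample='0000:00:14.0 USB controller: Intel Corporation (onboard)
0000:05:00.0 USB controller: ASMedia Technology Inc. (PCIe card)'
addrs=$(printf '%s\n' "$sample" | grep -i "USB controller" | cut -d' ' -f1)
printf '%s\n' "$addrs"
```

Each printed address maps onto one `<hostdev>` block's `<address>` element, so the PCIe card and an onboard controller can be handed to different VMs.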
  4. I did some testing last night; I couldn't pass through the PCIe USB card. Need to figure something out later. However, I did get two separate Windowses running at the same time, but the host crashed again. Need to figure something out here too. It was almost 2 am and I had woken up at 6 am, so I was a bit tired there, so it might have been just something minor. I'll keep you posted :)
  5. Ah, okay. It would not actually help me, as my second boot is non-virtual Windows running 980 Ti's in SLI. I should still disable the first PCIe slot when not using unRAID. In an AMD system, like PowerJunkie is planning, this could ease things up for sure. I didn't notice you were not the OP, my bad. I am sure he appreciates all the info that comes up.
  7. Here comes the plot twist: unRAID is my first touch with Linux distros. I was inspired by the Linus Tech Tips video about having two rigs in one tower, and I decided I could try that. It also justified buying a second Gigabyte G1 980 Ti if I at least tried. Anyhow, like I said, I had zero experience tinkering with Linux or virtual machines before, so Google and this forum have been very useful. Last week I didn't have much time, as I am searching for a project for my engineering thesis (energy technologies here) while working in my first profession.
I did, however, manage to get a stable VM and played around a bit, as in played games. I don't know what I did differently from last time; maybe I was just lucky? The host froze once while I was in the VM Windows and I had to hard reset it, but the VM started successfully after reboot with no crashes.
I have a PCIe USB3 card coming, which should help me and give me hot-plug USB slots. I'll try to pass it through to the "primary" VM. It gives 2 ports in the back, and I can use the case's front ports through it, so four ports in total. Just enough for mouse, keyboard, USB headset and one for hot-plug drives/sticks.
About the PCIe slots: the Rampage IV Black Edition has four full-size PCIe slots, 16x/8x/16x/8x. I had to sacrifice the first 16x for the host GPU, which is always the first GPU; there is no iGPU on LGA2011 CPUs. The third slot (the second 16x) is empty, so I have enough spacing for airflow for the upper 980 Ti. As for disabling PCIe slots, I don't know how 2011-v3 mobos do it, but I have mechanical on/off switches on my motherboard which I use for this. As the third and fourth PCIe slots are 8x only, I can't run 16x/16x SLI, but after all, 8x/8x is only a 1-2% loss in performance.
Oh, about the host crashing: I have actually changed two things. I got rid of the RAID and replaced it with a single 500GB 850 Evo. Now the unRAID cache has 3x256 in data raid0 / metadata raid1. Having all drives in AHCI in the system might do something, even though the RAID wasn't part of unRAID. Now I worry about having one SSD under the chipset and two under a separate controller. I might change it later so that all cache drives are under the ASMedia controller and the HDDs go under the chipset. Second change: last time I used cores 0-5 on the first VM, 6-9 on the second and 10-11 on the host. Now the host gets 0-1, the first VM 2-7 and the second (which I played around on) gets 8-11.
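The pinning layout described above (host on 0-1, first VM on 2-7, second on 8-11) can also be applied from the command line with `virsh vcpupin`. A sketch that only prints the commands it would run; the domain names `Win1`/`Win2` are placeholders, not the actual VM names:

```shell
# Print the virsh commands that would pin the first VM's six vCPUs to
# host cores 2-7 and the second VM's four vCPUs to cores 8-11.
# Drop the echo to actually apply them (needs libvirt running).
for v in 0 1 2 3 4 5; do
  echo "virsh vcpupin Win1 --vcpu $v --cpulist $((v + 2)) --config"
done
for v in 0 1 2 3; do
  echo "virsh vcpupin Win2 --vcpu $v --cpulist $((v + 8)) --config"
done
```

With `--config` the pinning lands in the domain XML, so it survives a VM restart.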
  8. Not all Broadwell-E are Xeons; they are just "second gen" LGA2011-v3 CPUs, i7s also. The first Broadwell-E launching will be Xeon, though. I had a very similar idea with my LGA2011 board (Rampage IV Black Edition) and 4930K (6c/12t, running 4.5GHz). I have separate drives (raid0 controlled by the chipset) where I have my "natural Windows", and I have 2 SSDs and 2 HDDs for unRAID via the ASMedia controller, so they are always AHCI. I run 980 Ti SLI on the natural Windows and disable the first PCIe slot when doing so. When booting unRAID, I enable the first PCIe slot, where I have an Nvidia 7600GS GPU for the unRAID host. This is something you need to take into account. Currently my host keeps crashing and I am tinkering with it when I have time. I use the 6.2 beta, since the trial key lets you have six drives like the basic registration key. I'll buy a real key later, either when I get this working or when the trial ends and they won't allow me to renew it. I have decided to get this running, however.
  9. As the topic says, the host crashed, so I had to reboot my machine via the reset switch. When unRAID (newest beta) rebooted, my VMs were gone. I had two Win 8.1 VMs running; one was installing and the other was already running benchmarks. Should I have manually used the "Mover", or did I miss something important? The VMs were installed directly to the SSD cache.
  10. Hello. I am currently upgrading my rig to a two-headed gaming rig and I have figured out most of it, latest thanks to the guys who helped me with my cache & SSD questions. Now I am looking for suitable HDDs for the parity and data drives, and since I don't have a really massive need for data or server space, I figured ~2TB would be fine to share between the two. So, I was looking at drives and got an offer from a friend: 2x 2TB WD Red NAS drives. The problem is, are they suitable for my needs? All VM OSes and apps will be stored on the SSD cache; these are for parity and some small data only, maybe in an emergency for some apps. These WD Reds have something called "variable speed". Would it cause problems in the described use if I keep all games and stuff on the SSD cache? Thanks, C.
  11. Thank you. This helps me set the next course (getting a couple of HDDs, for a start). From what I have read, unRAID always claims the GPU located in the first PCIe slot if there is no iGPU? At least with Nvidia cards. I'd prefer having my 980 Ti's in the 1st and 3rd slots, but it seems I have to go with the 2nd and 3rd slots, with the 7600GS in the first slot for the host. Is there any way to change this?
  12. Can I set it to raid0 if there is nothing to lose? Just the VM Windows and Steam/Origin, everything re-downloadable if one drive fails. I know I have to make the whole thing again if one drive fails, but still.
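For what it's worth, on a btrfs cache pool the conversion to raid0 is a single balance over the mounted pool. The sketch below only prints the command rather than executing it (running it needs root and a real pool), and it assumes unRAID's usual `/mnt/cache` mount point:

```shell
# raid0 stripes with no redundancy: losing either SSD loses the whole
# pool, so this only makes sense when everything is re-downloadable.
pool=/mnt/cache
cmd="btrfs balance start -dconvert=raid0 -mconvert=raid0 $pool"
echo "$cmd"
```

Afterwards `btrfs filesystem df /mnt/cache` shows which profiles the data and metadata actually ended up with.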
  13. SSDs are not officially supported, but they do work. Long-term data loss might be an issue, sure. So, 2x HDD (parity + array) and SSDs for cache. Two drives would go to raid1; four drives should make essentially raid10, if I understood this thread right: https://lime-technology.com/forum/index.php?topic=40804.0
  14. Yes, I am aware of this. However, I didn't have any HDD available over the weekend, and I successfully ran Windows 8.1 in a VM while the whole system was based on SSDs. In the long run, it would be better, then, to have a fast HDD, let's say a WD Black, as parity and data, and a cache of SSDs in raid0 (maybe raid10?).
  15. Ah, okay. So, let's say I use only my SSDs for now (I have six of them if I sacrifice my Windows drives): I use one 256GB for parity; this determines the maximum drive size. The rest of the drives are identical. I plan to use all six from the basic license for unRAID, no additional drives for dual-boot Windows. I use one 256GB for cache, from where I plan to run the VM OSes. The remaining four drives pool up for data, basically 4x256 (of course every drive has less usable space). The other option is 2x256 for cache, which will be raid0 if that is possible then, and 3x256 for the data pool. With the parity drive, this makes up the six drives the basic version allows.
  16. Yes, I am aware of that. I did my tests with 3x 256GB SSDs: parity + data + cache. I had to remove the Windows drives, as the trial supports only three drives. But, as I said, I am willing to pay for whatever suits my needs best. So: if I use a 256GB SSD as the parity drive, another for data and two for cache, do I have a raid0 cache or a raid1 cache? And how would I maximize the available space based on my current drives? This is rather important, because if cache drives are always raid1, then there is really no use for more than two in my case? The whole drive system is still a bit blurry to me. Sorry.
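As a rough capacity check for identical drives under the common btrfs cache-pool profiles, ignoring filesystem overhead (the 256GB size is taken from the posts above; the arithmetic is a simplification, not unRAID documentation):

```shell
# Usable space for N identical drives: raid0 stripes across all of
# them, raid1 and raid10 keep two copies of everything.
drives=2; size=256                  # two 256GB cache SSDs
raid0=$((drives * size))            # all capacity, no redundancy
raid1=$((drives * size / 2))        # mirrored, survives one drive
echo "2 drives -> raid0: ${raid0}GB, raid1: ${raid1}GB"
drives=4
raid10=$((drives * size / 2))       # four drives, still two copies
echo "4 drives -> raid10: ${raid10}GB"
```

So under btrfs, raid1 keeps usable space at roughly half the pool total regardless of drive count, while raid10 has the same capacity math but adds striping for speed.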
  17. Hello everyone. I am new to unRAID and the forums, but I have already tested my hardware and done some goofing around with one VM. It works, which is very inspiring for someone with zero experience outside Windows. As the title hints, I am planning to build a gaming rig with two VM Windowses, but I wish it to remain capable of utilizing all hardware when needed. For this I have separate drives for my "personal Windows". So, to round up what I have, what I want and what I am wondering:
- two-headed gaming rig, 6 threads each from my 4930K (6c/12t)
- one trusty 7600GS for unRAID (goodbye 16x slot!)
- 980 Ti's, one for each "head"
- 32GB RAM, around 14GB for each "head"
- four drives
With great drives comes a great problem: currently I have 4x256GB SSDs. Would I benefit anything from having four cache drives and 2x data drives (let's say 2x2TB HDDs paired)? How would I make the most of my SSD drives if I want maximum available storage? Thanks. Also, I am willing to buy any registration key needed if it suits my needs, but 2xHDD and 4xSSD would be usable with the basic version. For now, the drives with the separate Windows can be ignored.