
About strikermed


  1. Ahhh, I gotcha... For my purposes that doesn't really help. :( I'd rather utilize the drives, and if I add a cache pool, I would use it to flush writes as quickly as possible. I could write a terabyte of data, and it would take hours to transfer the way he was doing it. Fortunately I got SMB shares to fly on FreeNAS via a virtual machine, so I guess they fixed those issues since he made those videos. Thanks again!
  2. Thanks for the confirmation, Johnie.black. I figured as much. I noticed Linus Tech Tips had some kind of configuration I wasn't 100% sure had been released or not. I'll be reconfiguring my network cards to optimize my 10GbE workflow and just have Unraid using 1GbE.
  3. Well, that's a bummer; that just means I'm wasting a good 10GbE card in that server. I'll just have to find another purpose for it.
  4. Here's the ultimate question... how can I get 10 gigabit transfer speeds via UNRAID (not a VM)? It seems like the array setup just doesn't allow for anything more than what a single hard drive can provide. I would like to see speeds upwards of 500MB/s or more with the number of hard drives I have in this thing. Ideally this would be a RAID6-style setup supporting redundancy while also taking advantage of multiple disk speeds. I usually only have one or two devices accessing it; I often use FreeNAS as my working drive, with UNRAID for off-peak backups. I would just like to see backups happen a lot more often, and a heck of a lot faster.
  5. I may have semi-answered my own question. In UNRAID I had set up an additional VirtIO interface for a network path. I was under the impression that the bridge was going to use the gigabit ports, thus routing through my router and its 1 gigabit ports, but apparently that is not how it actually functions. By using the same IPs that were assigned via DHCP from my router, I was able to connect to FreeNAS and get my aforementioned 500MB/s or more. So, as a knowledge builder for anyone else: if you set up virtual machines on UNRAID, they will automatically be connected to each other as long as they are part of the br1 interface. You will notice that they show a link speed of 10 gigabit, and it appears that they communicate from VM to VM in that fashion. Obviously if you are contacting a separate device that's not part of the server, like a PC, then you could see 1 gigabit speeds in that scenario. If anyone else has any input, please let me know. My initial thought was to set up a connection that could be dedicated to VM use, but it seems that idea failed, and I'll be tweaking my VM setup to remove that added connection.
  6. I'm seeking the mysterious 10 gigabit speeds that come with the virtual connections assigned when you set up a VM. I have a Windows 10 VM that detects a 10 gigabit link (this is a virtual interface). I also have FreeNAS running as a VM, with a virtual interface that likewise detects 10 gigabit. I can connect to FreeNAS from Windows 10, but when I do any transfers, I get at most 200-245MB/s. I tried some driver tuning on the Windows side, and matters got worse: the same rates, but then drops to 0MB/s and halts. The only tuning I've been able to do on FreeNAS is changing the MTU to 9000. Pretty much I maxed out MTU, RSS (receive-side scaling), and receive and transmit scaling. With all that said, I now can't get back to where I started after changing the settings back. Any help would be appreciated.
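One way to tell whether the bottleneck is the virtual network itself or the disk/SMB layer is to measure raw throughput between the two VMs with iperf3, independent of any file transfer. This is just a sketch; the IP address below is a placeholder, and iperf3 would need to be installed on both guests (it is available in the FreeNAS/FreeBSD packages and has a Windows build):

```shell
# On the FreeNAS VM: start an iperf3 server listening on the virtio interface.
iperf3 -s

# On the Windows 10 VM: run a 30-second test against the FreeNAS VM's
# bridge IP (192.168.1.50 is a placeholder), with 4 parallel streams.
iperf3 -c 192.168.1.50 -P 4 -t 30
```

If iperf3 reports several gigabits per second while SMB copies still sit at 200-245MB/s, the virtio link is fine and the tuning effort belongs on the disk or SMB side rather than the network drivers.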
  7. Hi, I fully understand your networking side of things, as I have been doing the same thing between Windows 10 clients for over a year now. I was wondering what both your hardware and software configuration is on your UNRAID system. Currently I don't see 10GbE speeds with my configuration; I hardly see 1GbE speeds. I wanted to know what drive configuration you used (array, cache, and parity) to verify whether my setup would be capable of that. Right now I have a direct connection to a FreeNAS box that provides 10GbE speeds (well, close, since I have 5 drives using 2 for parity). I would really like to see UnRAID reach those speeds with over 12 drives in it (utilizing 2 SSDs for cache). Thanks!
  8. So, this stumped me for hours, and after reading over my troubleshooting steps I came to the conclusion that I hadn't gone through my BIOS settings after adding a new CPU. Everything looked correct, but there was one setting I thought was suspicious, since I was able to virtualize without any issue with a single CPU. It turns out that with 2 CPUs the IOMMU paths get more complicated, so this setting is crucial to have enabled. On my motherboard it was under advanced settings, under PCI Express: "Single Root IO Virtualization Support" (SR-IOV). Hopefully anyone who runs into this issue will find this to be their solution.
  9. First, the error:

internal error: qemu unexpectedly closed the monitor:
2017-07-08T22:34:48.518616Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,addr=0x9: vfio: failed to set iommu for container: Operation not permitted
2017-07-08T22:34:48.518646Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,addr=0x9: vfio: failed to setup container for group 44
2017-07-08T22:34:48.518652Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,addr=0x9: vfio: failed to get group 44
2017-07-08T22:34:48.518667Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,addr=0x9: Device initialization failed

My hardware:

  • Asus Z10PE-D16 WS
  • Dual Intel Xeon 2683 v3 (14-core CPUs)
  • 128GB ECC memory (64GB per CPU in quad-channel arrangement)
  • 2 LSI 9300 HBA cards connected to 4 backplanes (16-bay); in those backplanes I have 5 6TB HDDs (HBA cards in CPU 1's PCI Express slots)
  • 1 Intel single-port 1GbE network card (in a CPU 1 PCI Express slot)
  • 1 Intel dual-port 10GbE network card (in a CPU 2 PCI Express slot)
  • 1000W power supply, Platinum rating

My problem: I have a VM set up to run FreeNAS with HBA and network card passthrough. It functioned perfectly until I installed a second processor and moved the 10GbE network card to another PCI Express slot. Now I can't pass PCI Express cards through to a VM without getting the error above. I did some troubleshooting: with no card passthrough, the VM starts up, and a Windows 10 VM with no passthrough also starts up. But when I try to pass through the HBA card or either of the network cards, whether they are in CPU 1's or CPU 2's expansion slots, the VM won't start. In my tests with other cards, whatever device was listed first was the "group #" that showed up in the error report. I also tried adding the PCI card to my Windows VM, and startup failed with the same error.

Again, when I remove the passthrough option it boots up as normal. Here's what I use when editing the XML, and it worked flawlessly when I had a single CPU. This goes just before the closing </devices> tag in the XML:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
</hostdev>

I should note that I verified that all the devices I'm trying to pass through are in separate IOMMU groups. For example, the LSI 9300s are in IOMMU groups 44 and 46, the 10GbE card is in groups 52 and 53 (one per port), and the additional network card is in group 45.
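Since adding the second CPU can reshuffle PCI topology, it's worth re-listing the IOMMU groups after the hardware change rather than trusting the old numbering. A generic sketch (not from this thread) that walks sysfs on the Unraid host and prints each group with its PCI addresses:

```shell
#!/bin/bash
# List each IOMMU group and the PCI devices it contains.
# Run on the host; prints nothing if the IOMMU is disabled in BIOS
# or the kernel was booted without intel_iommu=on.
shopt -s nullglob   # so the loop simply does nothing when no groups exist
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=$(basename "$(dirname "$(dirname "$dev")")")
    echo "IOMMU group ${group}: $(basename "$dev")"
done
```

If a group fails to print at all after the CPU swap, or two passthrough devices now share a group, that points back at the BIOS/SR-IOV setting described in the previous post rather than at the VM XML.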
  10. Hi everyone! I have an unusual network mapping request. Instead of mapping to the UNRAID server, I would like to map the UNRAID server to another file storage system. My end goal is to allow the Plex Docker to access an external NAS or a virtual machine's storage to stream content. To give you an idea of my actual setup: I have several virtual machines; one is Windows Server 2012 R2, one is a Windows 10 client, and the last is FreeNAS, which I use to handle my storage needs at the moment. I would like the Plex app to have access to the FreeNAS storage to stream my video content. This would lighten the load on my Windows 10 PC, which currently runs the Plex server and accesses the FreeNAS storage to do so. Is there a way to map this, or do I need to add it to the array as a storage device in some way? Thanks!
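One common pattern for this kind of setup (a sketch only; the IP, export path, and mount point below are placeholders, not details from the thread) is to mount the FreeNAS share on the host and then map that host path into the Plex container as a volume:

```shell
# Mount an NFS export from the FreeNAS VM on the host.
# 192.168.1.50 and /mnt/tank/media are placeholder values.
mkdir -p /mnt/disks/freenas_media
mount -t nfs 192.168.1.50:/mnt/tank/media /mnt/disks/freenas_media

# Map the mounted share into the Plex container read-only,
# so Plex sees the FreeNAS library at /media inside the container.
docker run -d --name plex \
  -v /mnt/disks/freenas_media:/media:ro \
  plexinc/pms-docker
```

On Unraid specifically, the Unassigned Devices plugin offers a supported way to mount remote SMB/NFS shares so they survive reboots, rather than hand-running mount; either way, the container only ever sees a local path, so no change to the array is needed.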