Lev last won the day on April 13 2018

Lev had the most liked content!

Community Reputation

65 Good

About Lev

  • Rank
    Advanced Member


  • Personal Text
    Supermicro X9, 128GB ECC DDR3 @ 1600MHz, 90TB (2x 6TB WD, 8x 8TB WD), 1x LSI9308 Cache: 2TB Crucial SSD

Recent Profile Visitors

1135 profile views
  1. @SpencerJ question: Looking back from where Limetech began in 2005, what's been the most rewarding part of the journey over the years for Tom personally?
  2. We all are, brother. I love my USG, but how I wish for something better beyond their current offerings. That said, it's still the best for my use case compared to the competition.
  3. I have used Untangle for the last few days. Some thoughts...
     - Lots of charts and reports, even a daily email with reports. Most if not all of these reports the Unifi USG will also give; pfSense does not, or at least not without some initial setup work.
     - Emails from the Untangle sales team chat-bot. Even though I marked myself as a 'home' user, the automated bots are hot to sell me a license.
     - Very easy to set up and get running, with a very easy user interface. I think this is its main advantage over pfSense.
     After two days I had found out what I wanted to know about Untangle and turned off the server it was installed on. I'm happy I did it; I now know more than I did before and what works best for my use cases. My recommendation for @gacpac is to get started with the Unifi USG, as it'll cover 99% of use cases, and its integration with other Unifi products makes it so easy to manage a home or business network.
  4. Thanks for this. Good things cost money, just like Unraid. I'm going to give untangle a try.
  5. Between those two, definitely the USG. I don't believe the ASUS RT-AC66 gets Merlin builds anymore either; it's rather old. Yes brother, you're not alone in feeling that way.
  6. @hawihoney some positive news to report. I did a PCIe passthrough of one of the four physical network adapters in my server. Then on my host I mounted the SMB share of the guest Unraid VM using the IP address assigned to that passthrough adapter. It works! My Unraid VM has two network adapters now: the br0 virtual adapter and this second passthrough adapter. Also more good news... I'm seeing transfer speeds at my disks' maximum, >160MB/sec, which is more than one network adapter alone can carry. I suspect this is SMB multichannel at work, spreading the file transfer across both network adapters. I'm very happy this works; now I can really scale out my 50+ hard drives across multiple Unraid VMs in a single server.
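A minimal sketch of the host-side mount described above. The IP address, share name, and credentials are all placeholders for whatever your guest's passthrough adapter and SMB export actually use:

```shell
# On the bare metal host: mount the guest Unraid VM's SMB share.
# 192.168.2.50, 'data', and the credentials are placeholders --
# substitute the IP assigned to the guest's passthrough NIC.
mkdir -p /mnt/remotes/vm-data
mount -t cifs //192.168.2.50/data /mnt/remotes/vm-data \
      -o username=youruser,password=yourpass,vers=3.0
```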
  7. We're both running into the same wall. Glad I knew I wasn't the first, otherwise I would have kept thinking I wasn't doing something right. I appreciate your detailed posts in this thread. Same; from all our testing it seems to be the only method that works reliably with the expected performance. That's one approach; I have another suggestion. It might be more reliable to use a common Linux command (iostat) to query disk activity on the parity or Plex disk to see if it's under heavy load. It has a few advantages over the hidden-file-check method: it uses common tools that are agnostic to the application, and you can define a granular definition of 'heavy load' for each application's use case.
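A rough sketch of the iostat idea, not a tested implementation. The disk name (`sdb`) and the 80% utilisation threshold are placeholder assumptions; tune both per application:

```shell
#!/bin/bash
# Gate a task on disk load using iostat (from the sysstat package).
# 'sdb' (the parity/Plex disk) and the 80% threshold are assumptions.
DISK=sdb
THRESHOLD=80

# %util is the last column of iostat's extended device report.
# Sample for 5 seconds and read the second report (the first covers
# averages since boot). Falls back to 0 if iostat produces no output.
util=$(iostat -dx "$DISK" 5 2 2>/dev/null \
        | awk -v d="$DISK" '$1 == d { u = $NF } END { print u + 0 }')

# Compare via awk, since shell arithmetic is integer-only.
if awk -v u="$util" -v t="$THRESHOLD" 'BEGIN { exit !(u >= t) }'; then
    echo "heavy load (${util}% util) - defer the task"
else
    echo "light load (${util}% util) - safe to run"
fi
```

The same check can be dropped into a cron job or a pre-run hook for whatever application needs to yield to the parity disk.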
  8. I've got this working but I'd like to reverse the example to share a VM filesystem and then mount on my Unraid host using VirtIO. Is this possible somehow?
  9. I've run into this too, and I must say I'm very thankful you forewarned about it so I knew in advance. I'd really like to centralize everything around the bare metal host so that the VMs are autonomous and require no access, making everything easier to manage. Here's what I've tried and what the results were, centered on my use case of initiating all transfers from the host and copying to the target VM.

     VirtFS (9p VirtIO): All my research pointed to VirtFS / VirtIO 9p as the optimal (least overhead) way to share a host filesystem with a VM; @eschultz has made some great posts about it. It appears to have some limitations. Performance from host disk to VM guest disk is about 38 to 40 MB/sec on average. I ran some iperf3 tests that @bonienl demonstrated and confirmed I get the same results, so there appears to be a fast communication layer from the guest to the host. It only works within the VM, though, so it doesn't work for my use case.

     NFS (via Unassigned Devices): While I was about to mount my VM's NFS export on the host, I found that the NFS exports are easy to overwrite and restart through my usual actions in the GUI, so it was hard to keep this mounted. When I did stop tinkering and tested speed, large file transfers from the host to the VM would fail after a few minutes.

     SMB: Large file transfers initiated at the host and copying to the VM all fail after a minute or two, just like you described @hawihoney. Transfers initiated at the VM that copy from the host's SMB share are stable and run at full disk performance, roughly 170 MB/s.

     So I still haven't found a method to initiate a file copy at the host and push out to Unraid guest VMs. Any suggestions, I'd love to try some new approaches; or if there is a better workflow to consider, please share.
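For anyone wanting to repeat the iperf3 check mentioned above, the basic invocation is below. The host address is a placeholder; use whatever IP the guest sees the host at:

```shell
# On the Unraid host: start an iperf3 server (listens on port 5201)
iperf3 -s

# Inside the guest VM: run a 10-second throughput test against the host
# (192.168.122.1 is a placeholder for the host's address)
iperf3 -c 192.168.122.1 -t 10
```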
  10. I'm curious to hear an update on this. Perhaps there's an entertaining story to be told from behind the scenes, where our heroes @OmgImAlexis & @limetech ventured forward 🏇 to enhance this feature but ran into some unforeseen challenges along the way... 🐙🤺
  11. Thanks for the details @hawihoney on the SMB mounts. I tried a different approach to avoid all the overhead of SMB and the pitfalls you encountered with that networking method: using VirtFS instead. This passes the host bare metal server's filesystem through to the VM, enabling me to use Midnight Commander or Krusader within the Unraid VM and manage everything from one place. Here's a great whitepaper I found, perhaps you'll find it interesting too: https://www.kernel.org/doc/ols/2010/ols2010-pages-109-120.pdf However, if I have a second Unraid VM and another JBOD like you do, this method doesn't scale. I could use the SMB method like you did, to share the VMs back to the bare metal host. I have more research to do; here are the questions I'm thinking about for a VirtFS-based solution: It would be great if the bare metal host could see the VM filesystem through VirtFS, is that possible? Is it possible for VMs to access and mount each other's filesystems through VirtFS?
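For reference, mounting a VirtFS (9p) share inside the guest looks roughly like this. The mount tag `hostdisks` is a placeholder for whatever target name is set on the VM's filesystem device, and the mount point is arbitrary:

```shell
# Inside the Unraid VM guest: mount the 9p share exported by the host.
# 'hostdisks' is the mount tag configured on the VM's <filesystem>
# device (a placeholder here); /mnt/disks/host is an arbitrary path.
mkdir -p /mnt/disks/host
mount -t 9p -o trans=virtio,version=9p2000.L hostdisks /mnt/disks/host
```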
  12. I've been following the same approach you posted. I just got step #6 working tonight. I don't understand step #7; can you explain its purpose and what it achieves in more detail? My situation at the moment is that my VM can see all my host's drives using the VirtFS direct mount, so that's good. However, I'd prefer to have it work in reverse, so that my bare metal host can mount the guest VM's filesystem using VirtFS.
  13. @johnnie.black this is a tremendous resource. I was interested in your thoughts on what the expected speeds would be, to determine the optimal configuration for 30+ drives in the following setup:

     SuperMicro SuperStorage 6047R-E1R36L configured as:
     - BPN-SAS2-846EL1 (front 24 bays) with three 8087 connectors
     - BPN-SAS2-826EL1 (rear 12 bays) with two 8087 connectors
     - LSI 2308 SAS2 PCIe 3.0 host adapter with two 8087 connectors (motherboard integrated)
     - 6x SATA ports on the motherboard, running at 6Gb/s SATA3 (DMI 2.0)
     - 2x NVMe slots via Supermicro AOC-SLG3-2M2 add-in card
     - The drives are all WD Red/White 10TB & 8TB models

     Unraid setup as of 4/27/2019:
     - Data disks x28: x24 connected to the BPN-SAS2-846EL1 dual-linked to the LSI 2308, x4 connected to motherboard SATA ports
     - Parity disk x1: connected to a motherboard SATA port
     - Cache disk x1: NVMe drive connected to the AOC-SLG3-2M2
     - Not in use/connected: BPN-SAS2-826EL1 (rear 12 bays), x1 SATA port, x1 NVMe port

     Is this the most optimal setup to spread out the utilization and avoid bottlenecks? Any opportunities for improvement? If I were to daisy-chain a single link from the third 8087 connector on the BPN-SAS2-846EL1 to the downstream BPN-SAS2-826EL1, how much of a negative impact on performance would that have? The 826EL1 would be empty, no disks. Would that reduce the PHYs available or add significant additional SAS overhead?
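Not an authoritative answer, but a back-of-the-envelope estimate of per-drive bandwidth on the dual-linked front backplane. The ~2200 MB/s of usable throughput per 4-lane SAS2 8087 link is an assumption (a commonly quoted real-world figure), not a measurement:

```shell
# Rough per-drive bandwidth for 24 drives behind a dual-linked SAS2 expander.
# 2200 MB/s usable per wide link is an assumption, not a measurement.
LINK_MB=2200
LINKS=2
DRIVES=24
echo "$(( LINK_MB * LINKS / DRIVES )) MB/s per drive"   # prints "183 MB/s per drive"
```

Since that per-drive figure is near a large spinner's outer-track speed, the dual link looks adequate for 24 data drives under this assumption.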
  14. How'd this work out for you? Any steps you found that worked?
  15. You didn't attach diagnostics or cite anything specific that didn't work. The only false advertising here is your post. I run 2GB just fine; prove me wrong.