
Lev

Members
  • Content Count

    295
  • Joined

  • Last visited

  • Days Won

    4

Lev last won the day on April 13 2018

Lev had the most liked content!

Community Reputation

61 Good

About Lev

  • Rank
    Advanced Member

Converted

  • Gender
    Male
  • Personal Text
Supermicro X9, 128GB ECC DDR3 @ 1600 MHz, 90TB (2x 6TB WD, 8x 8TB WD), 1x LSI9308, Cache: 2TB Crucial SSD

Recent Profile Visitors

939 profile views
  1. We're both running into the same wall. Glad to know I wasn't the first, otherwise I would have kept running into it thinking I was doing something wrong. I appreciate your detailed posts in this thread. Same here, from all our testing it seems to be the only method that works reliably with the expected performance. That's one approach, but I have another suggestion: it might be more reliable to use a common Linux command (iostat) to query disk activity on the parity or Plex disk and see if it's under heavy load (rough sketch below). It has a few advantages over the hidden file check method: it's a more common tool, it's agnostic to the application, and you can define a granular threshold for 'heavy load' to fit each application's use case.
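A minimal sketch of what I mean, assuming the parity disk shows up as sdb and treating anything over 50% utilization as 'heavy load' (both the device name and the threshold are placeholders to adjust for your own setup):

```bash
#!/bin/bash
# Sketch: check whether a disk is under heavy load before kicking off a task.
# DEVICE and THRESHOLD are assumptions -- point them at your parity/Plex disk
# and your own definition of "heavy load".
DEVICE="sdb"
THRESHOLD=50

# Sample extended device stats twice over 5 seconds; the last report reflects
# current activity. %util is the final column of iostat -x output.
UTIL=$(iostat -dx "$DEVICE" 5 2 | awk -v d="$DEVICE" '$1 == d { u = $NF } END { printf "%.0f", u }')

if [ "$UTIL" -ge "$THRESHOLD" ]; then
    echo "$DEVICE busy (${UTIL}% util) -- defer"
    exit 1
fi
echo "$DEVICE idle enough (${UTIL}% util) -- proceed"
```

You could call something like this from the mover or from a user script and only proceed when it exits 0.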
  2. I've got this working, but I'd like to reverse the example: share a VM filesystem and then mount it on my Unraid host using VirtIO. Is this possible somehow?
  3. I've run into this too, and I must say I'm very thankful you forewarned about it so I knew in advance. I'd really like to centralize everything around the bare metal server host, so the VMs stay autonomous and require no access, making everything easier to manage. Here's what I've tried and what the results were, centered on my use case: initiating all transfers from the host and copying to the target VM.

     VirtFS 9p VirtIO
     All my research pointed to VirtFS / VirtIO 9p as the optimal (least overhead) way to share a host filesystem with a VM. @eschultz has made some great posts about it. It does appear to have some limitations: performance from host disk to VM guest disk averages about 38 to 40 MB/s. I ran the iperf3 tests that @bonienl demonstrated (sketch of the check below) and confirmed I get the same results, so there does appear to be a fast communication layer from the VM guest to the host. The mount only works within the VM, though, so it doesn't work for my use case.

     NFS (via UnAssigned Devices)
     While I was trying to mount the VM's NFS export on my host, I found that the NFS exports are easily overwritten and restarted by my usual actions in the GUI, so it was hard to keep the mount in place. When I did stop tinkering and tested speed, large file transfers from the host to the VM would fail after a few minutes.

     SMB
     Large file transfers initiated at the host and copied to the VM all fail after a minute or two, just like you described @hawihoney. Transfers initiated at the VM that copy from the host's SMB share are stable and run at full disk performance, roughly 170 MB/s.

     So I still haven't found a method to initiate a file copy at the host and push out to VM Unraid guests. Any suggestions? I'd love to try some new approaches, or if there is a better workflow to consider, please share.
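For anyone wanting to repeat the network-layer check, here's a minimal sketch of an iperf3 host-to-guest test (the guest IP 192.168.1.50 is just a placeholder, iperf3 is assumed to be installed on both sides, and this isn't necessarily the exact invocation from @bonienl's post):

```bash
# On the Unraid VM guest: start an iperf3 server listening on the default port (5201).
iperf3 -s

# On the bare metal host: run a 30-second throughput test against the guest.
# 192.168.1.50 is a placeholder for the guest's IP on the virtio network.
iperf3 -c 192.168.1.50 -t 30

# Reverse direction (guest -> host) without swapping roles.
iperf3 -c 192.168.1.50 -t 30 -R
```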
  4. I'm curious to hear an update on this. Perhaps there's an entertaining story to be told from behind the scenes, where our heroes @OmgImAlexis & @limetech ventured forward 🏇 to enhance this feature but ran into some unforeseen challenges along the way... 🐙🤺
  5. Thanks for the details on the SMB mounts, @hawihoney. I tried a different approach to avoid the overhead of SMB and the pitfalls you encountered with that networking method, by instead using VirtFS. This passes the host bare metal server's filesystem through to the VM, letting me use Midnight Commander or Krusader within the Unraid VM and manage everything from one place (a rough sketch of the guest-side mount is below). Here's a great whitepaper I found, perhaps you'll find it interesting too: https://www.kernel.org/doc/ols/2010/ols2010-pages-109-120.pdf However, if I have a second Unraid VM and another JBOD like you do, this method doesn't scale. One option would be the SMB method you used, sharing the VMs back to the bare metal host. I have more research to do; here are the questions I'm thinking about regarding a VirtFS solution: Would it be possible for the bare metal host to see the VM's filesystem through VirtFS? Is it possible for VMs to access and mount each other's filesystems through VirtFS?
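In case it's useful to anyone following along, a minimal sketch of the guest-side mount I'm describing; the mount tag "hostshare" and the paths are assumptions and need to match whatever the VM's XML defines for the VirtFS/9p share:

```bash
# Inside the Unraid VM guest: mount the VirtFS/9p share exposed by the host.
# "hostshare" must match the mount tag (the <target dir='...'/>) in the VM's XML,
# and /mnt/host is just an example mount point.
mkdir -p /mnt/host
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/host

# Optional: make it persistent across reboots via /etc/fstab.
echo "hostshare /mnt/host 9p trans=virtio,version=9p2000.L,_netdev 0 0" >> /etc/fstab
```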
  6. I've been following the same approach you posted. I just got step #6 working tonight. I don't understand step #7, can you explain its purpose and what it achieves in more detail? My situation at the moment: my VM can see all my host's drives using the VirtFS direct mount, so that's good. However, I'd prefer to have it work in reverse, so that my bare metal host can mount the guest VM's filesystem using VirtFS.
  7. @johnnie.black, this is a tremendous resource. I'm interested in your thoughts on the expected speeds and the optimal configuration for 30+ drives in the following setup (my rough bandwidth math is sketched below):

     SuperMicro SuperStorage 6047R-E1R36L configured as:
     • BPN-SAS2-846EL1 (front 24 bays) with three 8087 connectors
     • BPN-SAS2-826EL1 (rear 12 bays) with two 8087 connectors
     • LSI 2308 SAS2 PCIe 3.0 host adapter with two 8087 connectors (motherboard integrated)
     • 6x SATA ports on the motherboard, running at 6 Gb/s SATA3 (DMI 2.0)
     • 2x NVMe slots via Supermicro AOC-SLG3-2M2 add-in card
     • The drives are all WD Red/White 10TB and 8TB models

     Unraid setup as of 4/27/2019:
     • Data disks x28: x24 connected to the BPN-SAS2-846EL1 dual-linked to the LSI 2308, x4 connected to motherboard SATA ports
     • Parity disk x1: connected to a motherboard SATA port
     • Cache disk x1: NVMe drive connected to the AOC-SLG3-2M2
     • Not in use/connected: BPN-SAS2-826EL1 (rear 12 bays), x1 SATA port, x1 NVMe port

     Is this the most optimal setup to spread out the utilization and avoid bottlenecks? Any opportunities for improvement? If I were to daisy-chain a single link from the third 8087 connector on the BPN-SAS2-846EL1 to the downstream BPN-SAS2-826EL1, how much of a negative impact on performance would that have? The 826EL1 would be empty, no disks. Would that reduce the PHYs available or add significant additional SAS overhead?
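For context, a rough back-of-the-envelope sketch of the dual-link ceiling, assuming SAS2 gives roughly 600 MB/s usable per 6 Gb/s lane after 8b/10b encoding and that all 24 front-bay drives stream at once (e.g. during a parity check):

```bash
# Rough bandwidth ceiling for the dual-linked BPN-SAS2-846EL1 (assumptions:
# 2 x4 wide ports = 8 SAS2 lanes, ~600 MB/s usable per 6 Gb/s lane,
# 24 drives reading concurrently during a parity check).
lanes=8
per_lane_mb=600
drives=24

echo "Wide port ceiling : $(( lanes * per_lane_mb )) MB/s"          # ~4800 MB/s
echo "Per-drive ceiling : $(( lanes * per_lane_mb / drives )) MB/s"  # ~200 MB/s
```

If those assumptions are roughly right, ~200 MB/s per drive is close to the outer-track speed of these 8TB and 10TB disks, which is the part I'd most like a sanity check on.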
  8. How'd this work out for you? Any steps you found that worked?
  9. You didn't attach diagnostics or cite any specific things that didn't work. Your post is nothing but false advertising. I run 2GB just fine; prove me wrong.
  10. I was reading through this thread today to find an answer to something (I found it) and just wanted to say thanks again @dlandon for all you do. Hard to count the ways your plug-in has been a tremendous addition to the community.
  11. Thanks for looking into this, much appreciated @Squid and @bonienl. I had encountered this too, but thought I had something configured wrong. Glad you wrote it up @nuhll 😃
  12. That was the only workaround I found too. If anyone is curious what my use case was... my UnRAID server is acting as a client only. I disabled SMB, NFS and anything else I didn't need; fewer things to have to secure.
  13. Yes, I have seen something very close to this. I disabled SMB and the dashboard exhibited the same behavior you posted.
  14. Upgraded from rc6 last night, no issues to report. I also noticed the long pause others have reported, but wouldn't have thought to mention it.
  15. Found the answer I was looking for. Thanks @johnnie.black