JimPhreak

Members
  • Posts: 723
  • Joined
  • Last visited

Everything posted by JimPhreak

  1. Can I ask your use case? Is there a particular reason you need your VMs in these different VLANs? Are you an IT professional with a home lab environment that needs this kind of segmentation? Just curious about your setup. I am an IT professional who does have a testing environment that I prefer not to have any contact with my main admin network, where my servers, networking devices, and main PC reside. Need might be a strong word, but I don't think I could live with taking my nicely VLAN'd network and moving back to a flat network. I do have some specific reasons for using multiple VLANs outside of my test environment, though. For example, I have one VM that is what I call my "private" VM. It is located in a VLAN that has its outbound NAT configured (along with some floating rules to prevent any packets from egressing my regular WAN interface) to go out my AirVPN WAN interface. I use the VM for any internet browsing or downloading (not torrenting, I use DelugeVPN for that) I'd like to keep as private as possible. I also have another VLAN that I use for communications between two sites (my main home and my vacation home, where I have my backup UnRAID server located), and thus my backup VM is located on that VLAN. I'm sure I could reconfigure this, but it works well as is. I have other VLANs as well (wireless, etc.), but those are not associated with any VMs.
  2. I realize this is a 1-year-old thread, but I'm thinking of doing just what you did in switching from ESXi 5 to KVM in UnRAID, and I have 4 NICs I'd like to segment similar to what you were trying to do here. How did you ever make out with this?
  3. I'm giving serious consideration to dropping VMware ESXi, converting my VMDKs for use in KVM, and making my UnRAID server bare metal again. However, I have a question about VLAN support in KVM, because I currently have multiple VMs in different VLANs than the one my UnRAID server is in. Now in ESXi I have a different VM network assigned for each VLAN and can just assign the VM network (and thus the corresponding VLAN) to each VM I want to use it with. Is there any way to do something similar in KVM on UnRAID?
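     For what it's worth, a rough sketch of how I'd expect this to look in KVM/libvirt on unRAID, assuming unRAID has been set up to create a VLAN-tagged bridge for the VLAN in question (the VM name "MyVM" and the bridge name "br0.10" for VLAN 10 are just placeholders, not anything from an actual config):

        # attach a virtio NIC on the VLAN 10 bridge to the VM's persistent definition
        virsh attach-interface --domain MyVM --type bridge \
          --source br0.10 --model virtio --config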
  4. Why can't you move some of your spinners to the HBA so the SSDs can have the higher speed motherboard ports? First off, both the M1015 and my SATA on-board ports are rated at 6Gb/s, so I don't really see any benefit to moving them. But even if I did want to, I can't, because my UnRAID server is in a VM and all the drives are connected to my controller, which is passed through to the VM via Direct-I/O passthrough. And there you might have the problem. Could it not be possible that it's the passthrough that is the problem? I would test unRAID bare metal just to be sure it's not a passthrough problem. Hmmmm, I just did a test copy (of the same data) from a Windows VM on the same VM server to UnRAID and I got 112MB/s the whole way. Two things stand out to me about this. First, I'm not sure why my transfer was capped at 1Gbps if both VMs are using VMXNET3 drivers and are therefore theoretically linked at 10Gbps. Secondly, why did this transfer stay consistently at 112MB/s while the transfer between my physical PC and my UnRAID server did not? I'm going to need to physically wire my laptop and do some testing from that.
  5. Why can't you move some of your spinners to the HBA so the SSDs can have the higher speed motherboard ports? First off, both the M1015 and my SATA on-board ports are rated at 6Gb/s, so I don't really see any benefit to moving them. But even if I did want to, I can't, because my UnRAID server is in a VM and all the drives are connected to my controller, which is passed through to the VM via Direct-I/O passthrough.
  6. That really frustrates me. Especially given the fact that I'm using four "prosumer" SSDs in the Intel 730s. I don't know what to try from here. I remember someone seeing a big improvement changing controllers, from HBA to onboard or the other way around. Also make sure you trim your pool regularly. I trim my pool regularly, but I have no option to move to onboard as I have 12 drives and 6 onboard SATA ports. I guess I could try updating the firmware on my M1015, but the last thing I want to do is brick the controller, as having my array down for extended periods of time is a no-no.
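     For reference, roughly how a manual or scheduled trim could look from the unRAID console (the /mnt/cache path is the usual cache mount point, and the cron schedule and fstrim path are just examples; adjust for your setup):

        # manually trim the btrfs cache pool; -v reports how much was trimmed
        fstrim -v /mnt/cache

        # example cron entry: trim weekly, Sunday at 3am
        0 3 * * 0 /sbin/fstrim /mnt/cache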
  7. That really frustrates me. Especially given the fact that I'm using four "prosumer" SSDs in the Intel 730s. I don't know what to try from here.
  8. So I converted my BTRFS cache pool from RAID1 to RAID10 and while I did see some speed improvements, it's still not where I'd like it to be. With the pool on RAID1 I tried to transfer 130GB comprised of 46 different video files. After about 15 seconds of saturating my gigabit connection the speed dipped to between 30-40MB/s for the remainder of the transfer. After converting to RAID10, I get peaks and valleys. The transfer speed will stay above 100MB/s for 2-3 files and then dip down to 30-40MB/s for 2-3 files and then back up and then back down again. I know this issue is a cache write issue because when I copy the exact same files back from the cache pool the speed is 112+MB/s for the entire transfer. I don't know what else to do or if this is just the way it is when transferring multiple large files.
  9. Can I change the btrfs balance from raid1 to raid10 on my cache pool without losing the data currently on it?
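     (For anyone finding this later: I did end up doing the conversion, and it can be done in place with a rebalance along these lines. The exact invocation below is just a sketch assuming the pool is mounted at /mnt/cache; take a backup first regardless.)

        # convert both data and metadata profiles to raid10 while the pool stays online
        btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache

        # check progress and the resulting profiles
        btrfs balance status /mnt/cache
        btrfs filesystem df /mnt/cache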
  10. How on earth can a 4-disk (all Intel 730 SSDs) cache pool not be able to write beyond 40MB/s? No matter what type of file I try to transfer or what share (or directly to the cache) I copy to, the transfer will start out saturating 1Gbps and then after about 10-20 seconds it stays somewhere between 30-40MB/s for the remainder of the transfer. Something is not adding up. With this hardware the speeds should not be this slow. EDIT: Any chance the bottleneck is my M1015 controller? I have 12 drives hooked up to it.
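     One way to rule the network in or out here would be to test write speed to the pool locally on the server, bypassing the share entirely (the file name and size below are arbitrary, and oflag=direct skips the page cache so the result reflects the disks/controller rather than RAM):

        # write a 4GB test file straight to the cache pool, then clean it up
        dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=4096 oflag=direct
        rm /mnt/cache/ddtest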
  11. Yes, I had 2 drives in a cache pool and just shut down the array, assigned the two new drives to the cache pool, and once I restarted the array all the data was there.
  12. If you wanted to save a little bit of money you could buy a 2-port HBA + SAS expander to connect 16 drives for about half the price. That's what I've got in my server.
  13. I don't use ESXi, but why would you want to add a vNIC and then bond them in unRAID? Why not "hard" bond them in ESXi for all your VMs? vNICs, regardless of ESXi, KVM, etc., will just use pure CPU to communicate. I do have them hard bonded in ESXi. Just figured there might be some way for me to take advantage of having a second NIC inside my unRAID configuration to maybe get some added bandwidth. Communication like that will only go through the CPU, so it doesn't matter. You can install a speed test program in your VMs and test inter-communication to see what I mean; a good program for benchmarking is "iperf". In your case you would install iperf on a networked PC that is "bonded" and in unRAID (or a VM in unRAID), and you would already see the speeds of the bond in ESXi (I guess 2Gbit/s). Ahhh, so you're saying even though I only have one "NIC" in unRAID, since it's virtual (and using the VMXNET3 drivers, which are capable of 10Gb) I'd still get max throughput based on the amount of physical bandwidth coming into my ESXi server.
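     For anyone else wanting to run the same test, the iperf pattern is roughly this (the IP is a placeholder for whatever address the unRAID box answers on):

        # on unRAID (or a VM on it): start the iperf server
        iperf -s

        # on the wired, bonded PC: push traffic at it for 30 seconds
        iperf -c 192.168.1.10 -t 30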
  14. I don't use ESXi, but why would you want to add a vNIC and then bond them in unRAID? Why not "hard" bond them in ESXi for all your VMs? vNICs, regardless of ESXi, KVM, etc., will just use pure CPU to communicate. I do have them hard bonded in ESXi. Just figured there might be some way for me to take advantage of having a second NIC inside my unRAID configuration to maybe get some added bandwidth.
  15. Is there a way to schedule regular scrubs of one's btrfs cache pool?
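     In case anyone else is wondering the same thing, a bare-bones way to do it from the command line plus cron (assuming the pool is mounted at /mnt/cache; the schedule and btrfs path are just examples):

        # start a scrub in the background and check on it later
        btrfs scrub start /mnt/cache
        btrfs scrub status /mnt/cache

        # example cron entry: scrub monthly, on the 1st at 2am
        0 2 1 * * /sbin/btrfs scrub start /mnt/cache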
  16. Hmmmmmm...I figured this would be a more common configuration for those using ESXi but maybe I'm wrong?
  17. I want to add a second vNIC to my unRAID VM in ESXi 5.5 and was wondering if there are any "gotchas" before I go ahead and do it and bond the adapters in unRAID. Last thing I want is to be unable to remotely access my server.
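     Note to self: once the second vNIC is added and bonded, the bond state can be sanity-checked from the local console before trusting remote access (bond0 being the usual default bond name):

        # show bonding mode, active members, and per-NIC link state
        cat /proc/net/bonding/bond0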
  18. When the idea for dual parity was first brought up I was super interested in it. However, now that I have 8TB drives and a second server (that mirrors the first for backups), I personally don't think I'll ever move to dual parity. For those that don't have proper backups I completely understand the need/desire for it, though. And since (from what I've personally witnessed) most people who run home servers don't seem to have proper backups, I imagine this will be a widely popular feature.
  19. Has anyone gotten Madsonic working through apache reverse proxy? I'm wondering if there is something inside the Madsonic config file that needs to be edited as simply putting a reference to it in my default.conf isn't working on its own.
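     For anyone else stuck on the same thing, the generic shape of the proxy block I'd start from in default.conf is below. The /madsonic context path and port 4040 are assumptions about how the container is configured (Madsonic likely also needs a matching context path set on its side), and mod_proxy/mod_proxy_http need to be enabled:

        # forward /madsonic to the Madsonic container (path and port are assumptions)
        <Location /madsonic>
            ProxyPass http://UNRAIDIP:4040/madsonic
            ProxyPassReverse http://UNRAIDIP:4040/madsonic
        </Location>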
  20. Jim, been thinking about this; the only things I could come up with are: Are you using the dev branch? Assuming you are using the dev branch, I think it must be an Apache issue - have you got a defined web root? Not sure what you mean by a defined web root in Apache. Below is my PlexRequests docker config.
  21. Just checked on mine and it takes me to https://mydomain.com. Hmmm, I wonder what could be different. I made the changes as you outlined in my docker settings, and my default.conf file in Apache concerning PlexRequests is identical to yours as well.
  22. This fix works great, thanks so much. However, there is one issue with it: it doesn't change the link for the Plex Requests hyperlink in the upper left-hand corner of the home page. So when you click on that, it tries to redirect to http://UNRAIDIP:3000/, which brings up an unknown path.
  23. Gotcha. Now that I've got 4 x 480GB SSDs in my pool I should be good keeping the data in my pool.
  24. You have multiple cache drives that are not in the same pool?