Who is running a 2-in-1 server?


Dimtar


4 minutes ago, SSD said:

 

10Gbps = 1250 MB/sec - more than fast enough for even the fastest drive.

 

But it does go through network drivers, not a direct SATA connection. Compared to the other end of an actual network connection, though, it is extremely fast!

Oh, so you're actually using 10 Gbps switches and Ethernet cards, and the shares are still connected as network shares? (Perhaps with drive letters attached to them.)
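For reference, the figure quoted above is straightforward unit arithmetic; a quick sketch of the conversion in Python:

```python
# Convert the nominal 10 Gbps link speed of the virtual NIC into MB/s.
link_gbps = 10                          # gigabits per second
mb_per_sec = link_gbps * 1000 / 8       # 1 Gb = 1000 Mb, 1 byte = 8 bits

print(f"{link_gbps} Gbps = {mb_per_sec:.0f} MB/s")   # -> 10 Gbps = 1250 MB/s
```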

57 minutes ago, jonathanm said:

They are local on the network. Instead of the network traffic going through a couple of Ethernet cards and a switch, it goes through a software switch set up by kvm at 10 Gbps. Not quite local, but way faster than going external over wires. This assumes you are using virtio network drivers.

What's the make/model of the KVM that adds this capability?

29 minutes ago, jonathanm said:

You misunderstand, understandably.

kvm as in https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine,

NOT kvm as in https://en.wikipedia.org/wiki/KVM_switch

It's the virtualization software that unRAID uses.

LOL :D

 

Ahhh... I see. I read it like you were implying the drives were actually local devices. In a sense they are, because everything is hosted on unRAID; but it's not truly local to the VM, since I/O still has to go through the Ethernet protocol to make its way to the disks. However, as SSD pointed out, by using 10 Gbps equipment it is actually about as fast as (if not faster than) local disk speeds.
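Since the speed benefit jonathanm describes assumes virtio network drivers, it can be worth confirming which NIC model your VM actually uses. A minimal sketch using the libvirt Python bindings (assuming Python and the bindings are available on the host; the domain name 'MyVM' is a placeholder):

```python
import xml.etree.ElementTree as ET
import libvirt  # the "libvirt-python" bindings

# Read-only connection to the local hypervisor; 'MyVM' is a placeholder VM name.
conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("MyVM")

# The domain XML lists every virtual NIC and the device model it presents to the guest.
root = ET.fromstring(dom.XMLDesc(0))
for iface in root.findall("./devices/interface"):
    model = iface.find("model")
    model_type = model.get("type") if model is not None else "default (emulated)"
    print(f"interface type={iface.get('type')}, model={model_type}")

conn.close()
```

If the model is not virtio, the VM is falling back to an emulated NIC (e1000/rtl8139 style), which is much slower than the paravirtualized path.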

9 minutes ago, jonathanm said:

Not sure we're on the same page. There is no physical 10Gbps equipment. It's all virtual in software.

Now I'm thoroughly confused... What am I missing!?!

 

FWIW, I'm not getting great speeds through my VM. Even when I use Midnight Commander via the CLI, I'm only getting ~56 MB/s. I don't really see anything at local disk speeds until I do a parity check/rebuild. Then it's about 131 MB/s.

 

 

12 minutes ago, Joseph said:

Now I'm thoroughly confused... What am I missing!?!

 

FWIW, I'm not getting great speeds through my VM. Even when I use Midnight Commander via the CLI, I'm only getting ~56 MB/s. I don't really see anything at local disk speeds until I do a parity check/rebuild. Then it's about 131 MB/s.

 

Think of a ramdisk: it looks and acts like a disk, and programs can talk to it like it's a disk, but there is no disk. There is only a driver that pretends to be a disk of sectors and cylinders and simulates it using computer memory. The simulated network interface likewise looks and acts like a network, and other programs interact with it as if it were a network, but under the covers everything happens by directly accessing the disks. There is no real network.

39 minutes ago, SSD said:

 

Think of a ramdisk: it looks and acts like a disk, and programs can talk to it like it's a disk, but there is no disk. There is only a driver that pretends to be a disk of sectors and cylinders and simulates it using computer memory. The simulated network interface likewise looks and acts like a network, and other programs interact with it as if it were a network, but under the covers everything happens by directly accessing the disks. There is no real network.

Are you referring to having your VM on the SSD cache? Of course that would make the VM faster, but would the I/O to the rest of the array be faster for doing that?

12 hours ago, SSD said:

10Gbps = 1250 MB/sec - more than fast enough for even the fastest drive.

I'm not so sure the virtual 10 Gbps interfaces perform any shaping to limit the bandwidth to 10 Gbps. Having the link specified as 10 Gbps means that programs that compute metrics for different links will give the virtual interfaces precedence.
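For what it's worth, you can see what link speed the guest kernel believes it has by reading sysfs from inside the VM. A tiny sketch, assuming a Linux guest and an interface named eth0 (a placeholder); virtio NICs may simply report -1, i.e. unknown:

```python
from pathlib import Path

IFACE = "eth0"  # placeholder; substitute your guest's interface name

try:
    # The kernel exposes the advertised link speed here, in Mb/s.
    speed = Path(f"/sys/class/net/{IFACE}/speed").read_text().strip()
    print(f"{IFACE} reports {speed} Mb/s")
except OSError:
    # Some virtual NICs refuse to report a link speed at all.
    print(f"{IFACE} does not report a link speed")
```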

11 hours ago, Joseph said:

LOL :D

 

Ahhh... I see. I read it like you were implying the drives were actually local devices. In a sense they are, because everything is hosted on unRAID; but it's not truly local to the VM, since I/O still has to go through the Ethernet protocol to make its way to the disks. However, as SSD pointed out, by using 10 Gbps equipment it is actually about as fast as (if not faster than) local disk speeds.

With the virtual machine, the host machine can move disk data into a memory buffer and hand that buffer off to the VM with a single flip of a bit.

 

With a physical machine, the host has to move the disk data into a buffer in a network card. Then the physical network card has to serialize the transfer over the cable, and the receiving network card has to deserialize it into a memory buffer. Only when everything has been received can that buffer be handed off to the client computer.

 

So whatever link speed the virtual interfaces announce, they still remove the serialization step. Disk data gets handed off to the VM almost identically to how it gets handed off to a local program running on the host.
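The serialize/deserialize cost described above is easy to feel in a toy comparison: handing a buffer over by reference costs essentially nothing, while pushing the same bytes through a socket (here a local socketpair stands in for a real NIC and cable) takes real time. A rough sketch in Python, not a rigorous benchmark:

```python
import socket
import threading
import time

data = bytes(64 * 1024 * 1024)  # 64 MiB of test data

# "Virtual" hand-off: the host just gives the guest a reference to the buffer.
t0 = time.perf_counter()
handed_off = data               # no per-byte work happens here
t1 = time.perf_counter()

# "Physical" path, crudely approximated with the local socket layer:
# every byte has to be pushed through send()/recv() and reassembled.
rx, tx = socket.socketpair()

def drain():
    remaining = len(data)
    while remaining:
        remaining -= len(rx.recv(1 << 20))

reader = threading.Thread(target=drain)
reader.start()
t2 = time.perf_counter()
tx.sendall(data)
reader.join()
t3 = time.perf_counter()

print(f"reference hand-off:   {t1 - t0:.6f} s")
print(f"through a socketpair: {t3 - t2:.6f} s")
```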

10 minutes ago, 1812 said:

 

To and from the array isn't.

 

To and from the cache can be, especially if it is an SSD.

 

To the RAM buffer can also be.

That's part of the problem. My cache SSDs aren't very large, so my VM is stored on an HDD inside the array. Also, I have caching turned off on some shares because I don't want important data to be residing on the cache pool waiting for the mover to kick in during off hours.

43 minutes ago, Joseph said:

my VM is stored on an HDD inside the array.

There's your problem; not part of it, the whole problem. I'd recommend at least moving the vdisk to a drive that is NOT parity protected. It doesn't necessarily have to be an SSD, but that would help. Just moving your vdisk image to a spinning drive mounted with Unassigned Devices would make a huge difference.

 

Data moving from one parity-protected array drive to another is going to be slow, regardless of what initiates the transfer. Since your VM is on an array drive, ANY array access will be slow, because every write becomes a read-modify-write cycle (read the old data and old parity, then write the new data and new parity), so all drives involved end up waiting on the parity drive.

 

If you feel you must have parity protection for your VM, then speed is the price you pay.


If there is one thing you want on an SSD cache, it is the VM image. If there is one thing you don't want on an array disk, it is that image. It will be VERY, VERY slow. Even installing the image on a UD (unassigned device) is much better than that.

 

Even the most modest SSD is larger than a typical disk image. Mine is 70 GB, which is more than generous.

 

I have a drive letter mapped to an unRAID share where I keep important documents on the array.


 

 

21 minutes ago, jonathanm said:

If you feel you must have parity protection for your VM, then speed is the price you pay.

 

18 minutes ago, bonienl said:

If you want speed and protection, use a cache pool.

 

17 minutes ago, SSD said:

Even the most modest SSD is larger than a typical disk image. Mine is 70 GB, which is more than generous.

 

OK... so the VM is 250 GB. Is there a way to delete non-essential apps/junk to free up space and shrink the physical size (and thus the virtual size) of the VM?
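There is no single button for this, but the usual pattern is: delete the junk inside the guest, zero out the now-free space (for example with sdelete -z on Windows), shut the VM down, and then re-write the vdisk so the zeroed space is no longer stored. A hedged sketch driving qemu-img from Python (all paths are placeholders; this only shrinks the on-disk footprint, and reducing the 250 GB virtual size would also require shrinking the guest's partitions first):

```python
import subprocess

# Placeholders: adjust to where your vdisk actually lives.
SRC = "/mnt/user/domains/MyVM/vdisk1.img"       # existing vdisk (raw or qcow2)
DST = "/mnt/cache/domains/MyVM/vdisk1.qcow2"    # compacted copy on the cache

# Show the current virtual size vs. actual disk usage.
subprocess.run(["qemu-img", "info", SRC], check=True)

# Re-write the image as qcow2; zeroed/unused space is simply not stored,
# so the output file's on-disk size shrinks. Run this with the VM shut down.
subprocess.run(["qemu-img", "convert", "-p", "-O", "qcow2", SRC, DST], check=True)

# If the format changed (raw -> qcow2), update the VM's XML to point at DST
# with driver type 'qcow2' before booting it again. Shrinking the *virtual*
# size is a separate step (qemu-img resize --shrink) and is only safe after
# the partitions inside the guest have been shrunk first.
```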

  • 2 weeks later...

Hiya OP, as many have already attested, this is not only viable, but for many of us the perfect solution for our computing needs.

 

If you check out my sig, you'll see that I currently have 2 servers doing exactly what you are talking about: 1 at home and 1 at work (soon to be a 3rd at a new office site). The home server lets me do all my gaming (2 VMs' worth at the same time), video and 3D work, while at the same time running an (admittedly overpowered) HTPC, plus Dockers, plugins, and pretty much anything else I want. The office server lets me run a media PC VM that I do all my work on (and sometimes play :P ) while keeping a multi-layered on-site/off-site backup and syncing schedule going for the coworkers. I use the spare cores for testing out new VMs, Linux distros, etc. when I have the time, and it gives me a really good feel for how they might run on a bare-metal system. I use both of these servers daily, and with how good unRAID has gotten and all of the awesome plugins/Dockers produced by the community, I don't think I would, or even could, go back to a 1 CPU = 1 PC type of setup.

 

The office server passes HDMI, mouse, and keyboard directly through to peripherals and is kind of my daily driver.

 

My home setup is a bit more niche, in that it's generally headless and I usually stream the VM to my laptop.

Setup 1) Direct out via HDMI to the TV, with mouse and keyboard pushed from the laptop to the VM via Synergy

Setup 2) VM streamed to an old Mint 18.1 KDE laptop via NoMachine (for desktop, Blender, etc.) and Steam In-Home Streaming (for gaming)

 

unRAID 6.4 is (for me at least) the highest-performance, most stable version of the OS yet. I could even update W10 without the whole one-core workaround now. So if you are thinking about pulling the trigger on it, I would; I have zero regrets. Just note that if you don't already have the hardware, it might be a poor time to buy: prices on GPUs and memory are RIDICULOUS right now.

 

That's my 2 yen, hope it helps!


My server is in my signature. I run a Linux VM which I use primarily, but I have a Windows VM for the times I need Windows, and a bunch of other VMs for testing purposes (pfSense, other Linux distros, etc.). I have 2 video cards I pass through to the various VMs, so I can only have 2 on at the same time, or else I need to use VNC instead, which works fine for me. I've had this setup for 6+ months and it's worked very well. No big issues.

On 1/18/2018 at 2:48 PM, SSD said:

I have a drive letter mapped to an unRAID share where I keep important documents on the array.

So I shrank the VM and moved it to the cache drive. The boot time is lightning fast; however, there are some mixed results regarding file transfers.

 

Inside the VM, using TeraCopy:

Copy from the share back to the share: about 68-70 MB/s

Copy from the share to the VM: about 68-70 MB/s

Copy from the share to an external USB 3.0 HDD on an exclusive bus: about 59-63 MB/s

 

Using MC via the CLI:

Copy from the share back to the share: about 45-50 MB/s

Copy from the share to the cache: about 200-210 MB/s

 

Parity checks/disk rebuilds:

The last one I ran clocked in at 131.1 MB/s.

 

I have dual Ethernet bonded on unRAID 6.4 along with the switch and workstation. Is there any way to achieve faster I/O speeds with VMs and other unRAID tools?
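One way to narrow down whether the array disks or the path through the share is the limit is to time a plain sequential read of a large file from inside the VM, once against the mapped share and once against a local/cache path. A minimal sketch (the file paths are placeholders, and the test file should be large enough to defeat caching):

```python
import time

def read_throughput(path, chunk=8 * 1024 * 1024):
    """Sequentially read a file and return the rate in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            total += len(block)
    elapsed = time.perf_counter() - start
    return total / 1_000_000 / elapsed

# Placeholders: the same large file on the mapped unRAID share and on a local disk.
for label, path in [("share", r"Z:\test\bigfile.bin"),
                    ("local", r"C:\test\bigfile.bin")]:
    print(f"{label}: {read_throughput(path):.1f} MB/s")
```

If the local read is fast but the share read sits around the same ~56-70 MB/s, the bottleneck is the share/array path rather than the VM itself.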

 

 

 
