joelones

[GUIDE] Virtualizing unRAID on Proxmox 3.1



I just wanted to report back to the OP (from http://lime-technology.com/forum/index.php?topic=31257.0) and confirm that Proxmox 3.1 does indeed work with grumpybutfun's unRAID vhd file (from http://lime-technology.com/forum/index.php?topic=30715.0). No surprise, I suppose, since Proxmox uses KVM. In any case, I decided to try Proxmox because of its smaller overall footprint compared to openSUSE's graphical installation, and because it's better suited to a headless server install. Since the underlying OS (Debian) is installed for you as part of the setup, I found the whole install relatively painless.

 

Obviously special thanks goes out to user grumpybutfun for creating the unRAID vhd file in the first place.

 

This is a quick & dirty guide, not tremendously thorough; I assume the reader can copy files via scp or WinSCP and edit with vi or nano. All in all, the Proxmox web UI is not as functional as ESXi's, or Virt-Manager's for that matter, so some command-line editing skills are necessary to change the configuration file.

 

I am also using an AMD CPU, so some things may not entirely apply to Intel users; I'll try to indicate the differences.

 

Installation:

 

Download the Proxmox VE 3.1 ISO installer and write it to a USB flash drive; you can google the download link. Since I was on Windows, I used SUSE Studio ImageWriter (https://github.com/downloads/openSUSE/kiwi/ImageWriter.exe) to write the ISO to the USB stick. Just make sure your USB drive contains no partitions; I deleted all the partitions on mine with the Home edition of MiniTool Partition Wizard (again, you can google the link).

 

The installation is very straightforward and consists of selecting your timezone, installation disk (see partitioning below), and IP address, and setting a root password.

 

See here for a complete guide: http://woodel.com/my-linux-how-to/proxmox_vidz/vid1/

 

Partitioning (Hard Way):

 

After initial installation I did the following:

 

I resized the logical volumes the Proxmox installer initially created, specifically in the pve volume group, because I didn't like how it partitioned my 120GB SSD. So I booted GParted (http://gparted.org/download.php) and opted to work in the terminal, as GParted itself is unable to alter logical volumes (at least it was in my case with the latest gparted-live-0.17.0-4-i486.iso).

 

By default, the Proxmox installation creates:

 

/dev/pve/root 
/dev/pve/swap
/dev/pve/data 

 

In short, I did the following from a terminal window while booted from the GParted ISO. (Two corrections to the obvious approach: resize2fs only shrinks the filesystem, so lvreduce is needed afterwards to actually give the space back to the volume group; and swap isn't an ext filesystem, so it's shrunk with lvreduce and re-initialized with mkswap rather than resize2fs.)

vgchange -a y (make the LVM volumes visible to the kernel)
e2fsck -f /dev/pve/root (check the filesystem)
resize2fs /dev/pve/root 8G (shrink the root filesystem to 8G)
lvreduce -L 8G /dev/pve/root (shrink the root LV to match)
lvreduce -L 2G /dev/pve/swap (shrink swap to 2G)
mkswap /dev/pve/swap (re-initialize the swap area)
e2fsck -f /dev/pve/data (check the filesystem)
lvextend -L +40G /dev/pve/data (expand data - where VMs go - by an additional 40G)
resize2fs /dev/pve/data (grow the data filesystem to fill the LV)

That left about 5.5G in the VG for snapshots. I haven't messed around with snapshots yet, so I'm not sure whether that's too little.

 

Partitioning (Easy Way):

 

If you want to simplify your life and save some time on partitioning, you can pass arguments at the boot: prompt right before the installer starts and avoid the above.

 

[Screenshot: the installer boot: prompt with "linux ext4 maxroot=10 swapsize=20" typed in]

 

The above example, linux ext4 maxroot=10 swapsize=20, sets the partition filesystem to ext4 (ext3 is the default), creates a root partition of 10GB, and sets a swap size of 20GB.

 

Some more options:

linux ext4 – sets the partition format to ext4. The default is ext3.

hdsize=nGB – sets the total amount of hard disk to use for the Proxmox installation. This should be smaller than your disk size.

maxroot=nGB – sets the maximum size of the root partition. It is a maximum, so on a small disk the partition may end up smaller than this.

swapsize=nGB – sets the swap partition size in gigabytes.

maxvz=nGB – sets the maximum size in gigabytes of the data partition. As with maxroot, the final partition may be smaller.

minfree=nGB – sets the amount of free space to leave in the volume group after the Proxmox installation.
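Putting several of these together, a full boot line might look like this (the sizes here are made up purely to show the syntax; pick values that fit your disk):

```
linux ext4 hdsize=100 maxroot=8 swapsize=2 maxvz=60 minfree=5
```

That would use 100GB of the disk in total, with an 8GB ext4 root, 2GB of swap, up to 60GB for the data partition, and at least 5GB left free in the volume group.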

 

Install webmin (Optional):

 

Download webmin from http://www.webmin.com/download.html

 

Copy it over via scp or WinSCP and install it from an SSH shell, logged in as root, with:

dpkg -i webmin_1.670_all.deb

 

If the above complains about missing dependencies, execute the following:

 

apt-get -f install

 

Prior to that, you'll probably want to run the following to update your system:

 

apt-get update
apt-get upgrade

 

Log in at https://IP:10000 with the root password you created during installation to administer your Proxmox server.

 

Make sure iommu is activated:

 

Log into an SSH shell as root, if you haven't already, and do the following:

 

vi /etc/default/grub

Change:

GRUB_CMDLINE_LINUX_DEFAULT="quiet"

To:

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"

I believe for intel it's:

intel_iommu=on

Then:

update-grub
echo "options kvm allow_unsafe_assigned_interrupts=1" > /etc/modprobe.d/kvm_iommu_map_guest.conf
reboot
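After the reboot, it's worth sanity-checking that the kernel actually picked up the IOMMU before trying passthrough. A quick check (the exact message text varies by kernel and platform, so this is just a rough filter):

```shell
# Look for IOMMU init messages: AMD-Vi on AMD boards, DMAR on Intel
dmesg | grep -i -e 'AMD-Vi' -e 'DMAR' || echo "no IOMMU messages found - re-check grub and the BIOS setting"
```

If nothing relevant shows up, double-check the grub edit above and make sure AMD-Vi/VT-d is enabled in your BIOS.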

 

Install an unRAID VM:

 

Create your initial VM for unRAID at https://IP:8006 (log in as root). You will find the Create VM button in the top right corner; I'll let you follow the wizard. This is how I configured mine:

 

[Screenshots: the Create VM wizard settings used for the unRAID VM]

 

Copy grumpybutfun's unRAID vhd file via scp or WinSCP to:

/var/lib/vz/images/100

 

Here, 100 is the virtual machine ID; it was the first VM created in my case, but yours may differ. I also deleted the .qcow2 file the wizard created, along with the line in the conf file representing that disk. I renamed the unRAID vhd file to vm-100-disk-1.raw from the command line, though I'm not sure that's strictly necessary.
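For reference, the copy/cleanup/rename can be done from a shell roughly like this (the source filename unraid.vhd and the IP are placeholders; substitute your actual file name, host address, and VM ID):

```shell
# from your workstation: copy the vhd to the Proxmox host
scp unraid.vhd root@IP:/var/lib/vz/images/100/

# then on the Proxmox host: drop the empty disk the wizard made and rename the vhd
cd /var/lib/vz/images/100
rm vm-100-disk-1.qcow2
mv unraid.vhd vm-100-disk-1.raw
```

The disk line referencing the deleted .qcow2 file still has to come out of the conf file by hand.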

 

Then I modified the configuration in:

/etc/pve/qemu-server/100.conf

 

Compare yours with the following. The net0 (NIC MAC address), usb0 (unRAID USB), hostpci0 (HBA), and cpu lines may be different in your case.

 

bootdisk: virtio0
cores: 1
cpu: Opteron_G5
hostpci0: 02:00.0
memory: 1024
name: unRAID
virtio0: local:100/vm-100-disk-1.raw
net0: virtio=56:74:22:0A:65:22,bridge=vmbr0
ostype: l26
sockets: 1
usb0: host=0930:6545

 

Configure Passthrough:

 

Note: The hostpci0 line (PCI device passthrough) is for my M1015, whose address can be found via lspci. This is necessary if you are passing an HBA/RAID card through for unRAID to use. The usb0 line is for the unRAID USB stick (which must have the label UNRAID; it is mandatory for unRAID to boot correctly). The USB address can be found via lsusb.

 

lsusb in my case:

Bus 003 Device 002: ID 0930:6545 Toshiba Corp. Kingston DataTraveler 102 Flash Drive / HEMA Flash Drive 2 GB / PNY Attache 4GB Stick

 

lspci in my case:

02:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 02)

 

Note for Intel users: Change your cpu from Opteron_G5 to the appropriate setting if you're not running an AMD processor. This can actually be done via the web UI.

 

Start unRAID:

 

Click on your unRAID VM, hit the Console button on the far right (not shown in the image below), then the Start button. (You will need Java installed for the console to work on your system.)

 

[Screenshot: the VM's Console and Start buttons in the web UI]

 

If everything works out, you should see the following:

 

[Screenshot: unRAID booting in the VM console]

 

Disclaimer:

 

I don't know what I'm doing so I'm sure there's something I'm missing. But it works.

 

Licensing:

 

There's a way to get rid of the subscription nag in 3.1, but I digress. What I'm not sure about is the ongoing support for non-subscription installations, since the enterprise repository requires a license to update from. You can comment out the line in /etc/apt/sources.list.d/pve-enterprise.list to ensure that apt-get update doesn't fail; all updates will then come from the non-subscription repository, which I assume is publicly available.
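If you'd rather not open an editor for it, commenting the enterprise line out can be done with a one-liner (this assumes the stock single deb line in the file; eyeball it afterwards to be sure):

```shell
# prefix the enterprise repo line with '#' so apt-get update stops failing on it
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
apt-get update
```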

 

Additional Resources:

 

Also, here is a tutorial series I found on Proxmox, for noobs like me: http://woodel.com/my-linux-how-to/proxmox_vidz/

 

Good luck!


Wow, this is everything I was looking for, and then some! Thanks :D

 

I'll try it out when I have time this weekend and let you know how it goes.


Wow, this is everything I was looking for, and then some! Thanks :D

 

I'll try it out when I have time this weekend and let you know how it goes.

 

Great Guide and great alternative solution.

 


This is a great guide.  I used it last night as I transitioned from ESXi to Proxmox.  Thanks for putting it together.  One additional comment: I hit an error with the M1015 that prevented the VM from booting, so I had to change some settings in the config file.

 

The changed settings are as stated:

machine: q35

hostpci0: xx:xx.0,driver=vfio,pcie=1,rombar=off

 

Per the Proxmox forum, this issue only occurs with firmware 15 and not with firmware 17; I can't confirm that, though, as I didn't flash mine to the newer firmware.


Make sure iommu is activated:

 

Log into an SSH shell as root, if you haven't already, and do the following:

 

vi /etc/default/grub

Change:

GRUB_CMDLINE_LINUX_DEFAULT="quiet"

To:

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"

I believe for intel it's:

intel_iommu=on

Then:

update-grub
echo "options kvm allow_unsafe_assigned_interrupts=1" > /etc/modprobe.d/kvm_iommu_map_guest.conf
reboot

 

This guide is perfect as I'm going to test out unRAID on Proxmox (first time user) this weekend.

 

Quick question on the above, though.  How does one actually edit the grub file while SSH'd in with PuTTY?  I can move the cursor around with the arrow keys, but I can't make any changes to the file.

 

Pardon me for being a noob at this.


I'm going to be testing my server with Proxmox and Hyper-V as an alternative to ESXi this week.  Already have Proxmox setup so hopefully I'll have some time to mess with it tonight/tomorrow.


I'm going to be testing my server with Proxmox and Hyper-V as an alternative to ESXi this week.  Already have Proxmox setup so hopefully I'll have some time to mess with it tonight/tomorrow.

 

How are you enjoying Proxmox over ESXi?


I'm going to be testing my server with Proxmox and Hyper-V as an alternative to ESXi this week.  Already have Proxmox setup so hopefully I'll have some time to mess with it tonight/tomorrow.

 

How are you enjoying Proxmox over ESXi?

 

I haven't gotten to the point of testing Proxmox yet.  I've been testing Hyper-V so far this week and that seems to be winding down since you can't pass through usb flash drives to Hyper-V guests and you can't install unRAID onto a SATA DOM SSD.  So I'll probably start testing Proxmox this weekend or early next week.


Curious to see if you ever got around to proxmox?

 

 

- NinthWalker

 

I fooled with it for a few days but wound up going back to ESXi.  I just like the management interface of the vSphere client much more than the Proxmox management options that are out there.  I'd play around with Proxmox more in a test environment but for my home server I needed to get things up more quickly for functionality and I've been using VMware for years so I just didn't have the time to learn a new hypervisor like I wanted.


This is a great guide.  I used it last night as I transitioned from ESXi to Proxmox.  Thanks for putting it together.  One additional comment: I hit an error with the M1015 that prevented the VM from booting, so I had to change some settings in the config file.

 

The changed settings are as stated:

machine: q35

hostpci0: xx:xx.0,driver=vfio,pcie=1,rombar=off

 

Per the Proxmox forum, this issue only occurs with firmware 15 and not with firmware 17; I can't confirm that, though, as I didn't flash mine to the newer firmware.

 

Has your installation been stable? I ran into a lot of reiserfs errors when using passthrough (1x SASLP and 1x M1015).


Just to give back a little to this wonderful community, here are a few mumblings...

 

I'm using Proxmox 4.0 as the native hypervisor on my machine and have unRAID 6.1.4 Pro running as a guest. Everything I tested works perfectly so far (*). One thing to know beforehand: Proxmox doesn't pass virtualization features through to guests, so I'd guess there's no virtualizing inside the virtualized (unRAID) environment. Dockers, of course, work as expected.

 

Setup: KVM, 4GB RAM, passed-through SAS3 controller and 1x USB port

Boot environment: plopkexec boots from the USB stick with unRAID installed (updateable, just as when running natively)

 

Small hiccups at first:

The SAS3 controller wasn't on the latest firmware to begin with. Everything worked except the preclear plug-in: it showed a second prompt that should have been acknowledged with yes, but nothing happened. After updating to the latest firmware, even preclear worked fabulously. Preclearing a 6TB WD Red Pro took 34 hrs (pre-read 12.25 hrs + zeroing 9.5 hrs + (fast) post-read 12.25 hrs).

 

Docker: Plex (linuxserver.io) [works with Plex Pass]

Works pretty well so far; updates get installed (restart the docker once an update is available). I brought over my big library (Mac based) and it worked immediately.

 

Network:

Performance is pretty good; I don't think there's much throughput lost to Proxmox. The cache SSD isn't used for copy-through-share; it's now just for dockers/data storage.

 

Reliability:

After the installation (and the setup specifics that come with virtualizing unRAID), I have not had a single problem: no crashes, no slowdowns, and all features work (spin-down of disks, etc.). For example, I copied a lot of data over the network, quite a few days in one go. I'm pretty happy here. My server will get a lot of other VMs later on, which need to be independent of unRAID (it's no good having to shut down VMs because of changes to the array, etc.).

 

 

* [solved] unRAID's dashboard believed eth0 wasn't connected, and there was a slowly but steadily increasing count of reported dropped packets. Neither seemed to cause reduced network speed or connection/discovery problems; I suspected Proxmox's network bridging setup.

It turned out I had (automatically) used virtio for the network as well. After switching it to an E1000, there has been not one dropped packet, and the dashboard now sees eth0.


As far as I can read, you guys never got into the details of using RAID 0/1/5/6/10 with your disks; all I can see is that some people tried to pass through SATA controllers.

Did you use a certain 300G disk from PVE for unRAID, or did all of you pass a controller or certain disks through?

What about the RAID level on PVE?

Can you guys share some details/insight on this?

I really want to go this route, using PVE as a base; I've loved it for years. But I also want to use unRAID as the NAS for my home, and ESXi inside PVE to evaluate and play around with.

Any tips or insight on this?

Posted (edited)
5 hours ago, jammsen said:

As far as I can read, you guys never got into the details of using RAID 0/1/5/6/10 with your disks; all I can see is that some people tried to pass through SATA controllers.

Did you use a certain 300G disk from PVE for unRAID, or did all of you pass a controller or certain disks through?

What about the RAID level on PVE?

Can you guys share some details/insight on this?

I really want to go this route, using PVE as a base; I've loved it for years. But I also want to use unRAID as the NAS for my home, and ESXi inside PVE to evaluate and play around with.

Any tips or insight on this?

 

OK, meanwhile I switched to unRAID as the native boot OS (so I have all virtualization features available). But if you don't need them inside unRAID, because you do them on Proxmox, then I would probably go the same route I took back then.

 

There are a few things to think about:

a) One of the coolest features of unRAID is that it is not a RAID x with all disks spinning all the time (see the name: "un" + "RAID"). That means unRAID needs sole control of the disks (and/or the controller they're connected to) that should spin down. Passing individual disks into a VM via PVE doesn't let unRAID spin them down, because PVE never spins them down on its own. So I passed through a whole host adapter, with all its disks connected, to be available only in the unRAID universe, and all features related to those disks worked as expected.

 

b) Plug-ins / dockers

They worked inside unRAID as expected (at least everything I tested, though back then I didn't have any hardware access from dockers, such as a TV card for TVHeadend). I would expect them to work: if the hardware is passed through like the HBA, then why not? The only additional layer of (possible) problems could come from the bootloader (plopkexec) sitting in front of unRAID's kernel. It worked perfectly with my HBA; YMMV.

 

c) PVE

On the PVE side of things you can do whatever you like (because all disks related to unRAID are, and should be, passed through as hardware into the VM). So go ahead with ZFS (if you have enough RAM) or any other kind of RAID that is supported.

 

d) Bootloader vs. boot image

Proxmox doesn't boot a VM from a USB stick directly (which is unRAID's way; it needs the USB stick's UUID to verify your license anyway).

The bootloader approach (plopkexec) has the advantage that the unRAID VM is ultimately booted from the USB stick (PVE boots the bootloader, which in turn loads unRAID from the stick). Because of this, unRAID sees no difference in its boot device, and everything behaves as when unRAID boots natively from the stick (updates, changes, etc. are written directly to the USB stick and are available on the next boot).

A VM disk image approach works too (I read about it back then), but has the disadvantage that you have to manually copy changes made to the VM disk image back to the USB stick before booting again.

 

I would choose Proxmox again over any other hypervisor.

 

Alright, I hope this helps with the decision making.

Edited by s.Oliver

Posted (edited)
4 hours ago, s.Oliver said:

 

OK, meanwhile I switched to unRAID as the native boot OS (so I have all virtualization features available). But if you don't need them inside unRAID, because you do them on Proxmox, then I would probably go the same route I took back then.

 

There are a few things to think about:

a) One of the coolest features of unRAID is that it is not a RAID x with all disks spinning all the time (see the name: "un" + "RAID"). That means unRAID needs sole control of the disks (and/or the controller they're connected to) that should spin down. Passing individual disks into a VM via PVE doesn't let unRAID spin them down, because PVE never spins them down on its own. So I passed through a whole host adapter, with all its disks connected, to be available only in the unRAID universe, and all features related to those disks worked as expected.

 

b) Plug-ins / dockers

They worked inside unRAID as expected (at least everything I tested, though back then I didn't have any hardware access from dockers, such as a TV card for TVHeadend). I would expect them to work: if the hardware is passed through like the HBA, then why not? The only additional layer of (possible) problems could come from the bootloader (plopkexec) sitting in front of unRAID's kernel. It worked perfectly with my HBA; YMMV.

 

c) PVE

On the PVE side of things you can do whatever you like (because all disks related to unRAID are, and should be, passed through as hardware into the VM). So go ahead with ZFS (if you have enough RAM) or any other kind of RAID that is supported.

 

d) Bootloader vs. boot image

Proxmox doesn't boot a VM from a USB stick directly (which is unRAID's way; it needs the USB stick's UUID to verify your license anyway).

The bootloader approach (plopkexec) has the advantage that the unRAID VM is ultimately booted from the USB stick (PVE boots the bootloader, which in turn loads unRAID from the stick). Because of this, unRAID sees no difference in its boot device, and everything behaves as when unRAID boots natively from the stick (updates, changes, etc. are written directly to the USB stick and are available on the next boot).

A VM disk image approach works too (I read about it back then), but has the disadvantage that you have to manually copy changes made to the VM disk image back to the USB stick before booting again.

 

I would choose Proxmox again over any other hypervisor.

 

Alright, I hope this helps with the decision making.

Well, my newly ordered hardware won't run, so sadly I have to go through RMA and all that.

 

For now I'm using my old dual Xeon (4c/4t each) with 24GB of DDR1 ECC RAM, which is enough for playing.

BUT on the other hand, I have 8 disks (7x1TB + 1x2TB) connected to a HW RAID controller (an Adaptec ASR-5805, I think), which simply lets me use all the disks, since my mainboard doesn't have enough ports.

As you said, on one hand it's "un"+"RAID", BUT on the other hand it isn't a RAID 6 with 600% read speeds, which is why for first tests I'm going with Debian 9.5 and PVE and virtualizing everything, including unRAID. Or do you see another way? Because I'm not really keen on doing a SW RAID via the Debian installer and just using my HW RAID card as JBOD, EXCEPT if there's a really good, solid benefit. Can you think of one? Any suggestions/feedback for me? This is a PoC; I'm trying it out right now, so please feel free to go crazy :)

Edited by jammsen

14 minutes ago, jammsen said:

on the other hand it isn't a RAID 6 with 600% read speeds


That depends on what reads you are performing. There is no striping, so no transfer speedup when reading a single file. But having multiple independent file systems still means you can read from all data disks concurrently and perform independent seeks on them. If you need to read multiple large files on RAID-6, you still get a significant slowdown from all the disk seeks: all disks need to seek to retrieve a stripe block of file A, then seek for a stripe block of file B, then file C, then back to A again.

Posted (edited)
6 hours ago, jammsen said:

As you said, on one hand it's "un"+"RAID", BUT on the other hand it isn't a RAID 6 with 600% read speeds, which is why for first tests I'm going with Debian 9.5 and PVE and virtualizing everything, including unRAID. Or do you see another way? Because I'm not really keen on doing a SW RAID via the Debian installer and just using my HW RAID card as JBOD, EXCEPT if there's a really good, solid benefit. Can you think of one? Any suggestions/feedback for me? This is a PoC; I'm trying it out right now, so please feel free to go crazy :)

 

What's your use case? Why the need for crazy read speeds (which depend heavily on how contiguously the blocks were written in the first place)?

ZFS, as the world's safest file system, is software-only and needs direct hardware access to the drives (so no RAID controller in between). I've already run into problems with RAID controllers (mostly in RAID enclosures); in any case, whenever possible I don't go for hardware RAID anymore.

 

And in the worst of all cases, where more drives die than your RAID config can handle: with unRAID, all the other drives still have their data intact, because each carries a regular file system (xfs) that can be read by any Linux.

 

unRAID has its disadvantages too, no doubt. It's not meant to be an ultrafast NAS/RAID in terms of disk throughput. But it can be an ultra-cool NAS with a twist. You can tune different areas (like using cache drives), you can pass independent drives (outside the array) through to VMs, and so on; a lot of things keep this solution ultra flexible, like adding drives of different sizes to the array, and more. And it can keep heat and energy usage low because of its don't-use, don't-spin approach (but you decide: if you don't want to wait on a drive spinning up to access files, then don't let it spin down; it's just a setting, per disk if you want). And think about a RAID controller failure for a minute: now you have a pretty hard time ahead (worst case, you can't get the same card and the new one can't read the RAID signature from your drives, oops).

 

So think for a minute about what you want to do with your rig. You want to game on it? You can with unRAID (friends and I do it). You want to use it as a daily mainstream workstation? You can. Running a Plex server, a TV recorder, or both and more? You can. There's no need for that 8-drive hardware RAID 6 for any of it. Buy one good SSD or NVMe as a cache drive and you're set. And whenever you need to move to new hardware, it's super easy with unRAID: take your USB stick, plug your disks into your SATA ports or HBA card, and boot from USB. There's no wondering "will it recognize my RAID 6 from the previous controller card?" or the like. I've done it several times; it's too cool how easy it is.

 

And by the way, the fewer drives spin, the quieter your rig gets: 8 drives all the time vs. 0 drives in the best case (for example, a Windows VM with GPU passthrough in a gaming session).

 

What do you want to do with it?

 

Edited by s.Oliver


Thanks, pwm and s.Oliver.

Since both of you asked what my scenario and end goal are, I should explain a little. (Just to be clear: I've used unRAID for over a year and know a fair bit about its usage and features, but I wouldn't call myself a pro. I did use unRAID on my gaming rig in a gaming-VM-in-a-NAS approach, again with the WD disks, a few months ago, but it really frustrated me with massive I/O and audio issues all the time, even after paying the premium for an X370 board over a B350 for the better architecture of ASUS boards and better IOMMU groups.)

 

For now it's just performance evaluation, but the long-term goal is to retire my rented, pretty powerful dedicated root server, which costs about 100€ a month and on which I do everything: web hosting, email, TeamSpeak 3, IPFire, docker, game server hosting for a multitude of games, evaluating bleeding-edge stuff, and a few things more, all done mainly with Proxmox because of the free choice between VMs, OpenVZ containers, or, if I want, docker containers.

 

Later, that 100€ root server should go; the critical stuff (web + mail, maybe TeamSpeak) would move to a 20-30€-per-month server, and everything else I want to self-host from my basement. My internet connection is good enough for that (as I said, that stuff isn't critical, so downtime is OK if it happens). As I said, the old hardware for my old unRAID NAS is a dual Xeon L5420 (4c/4t) with 24GB ECC DDR1 RAM, plus a few used disks; that's why I want RAID 6, because of their multitude of different lifetimes and the possibility of one dying any minute. The disks are 2x 1TB WD Caviar Black (7.2k rpm) with about 5 years of power-on time and 1x 2TB WD Green (5.4k rpm), 5 years too I think; both were bought new and are my own. Then I picked up 5x 1TB Samsung Spinpoint SATA2 (7.2k rpm) for about 100€ used on eBay, with no clue about the SMART data on any of them; one of them hiccupped again just yesterday, which basically means it's dead for usage and no longer reliable. My old system can just about handle the speeds the disks require: the two PCIe lanes are only the v1 standard, and I'm using the x4 connector because a chipset heatspreader blocks the x8 port. All three WDs support SATA3 on paper, but let's be real: the WD Green SATA3 is basically a 5400rpm disk, about as fast as a 7200rpm SATA2 Samsung Spinpoint, averaging about 90MB/s.

 

When the new hardware comes back from checking/RMA, I'll have a low-power Xeon ( https://ark.intel.com/de/products/75270/Intel-Xeon-Processor-E5-2650L-v2-25M-Cache-1_70-GHz ) (10c/20t) with 96GB of DDR3 ECC RDIMM (R4) and a bigger mainboard that can handle the 8 disks itself, so I won't have to use the HW RAID card that only does JBOD. When that happens, I plan to future-proof more, maybe buying new disks so they don't bottleneck the CPU and RAM.

 

My end goal is to retire the root server BUT also to educate myself on nested virtualization. I plan to virtualize ESXi and unRAID and play around with infrastructure automation, IaaS deployments, or maybe XaaS stuff, which basically gives me the ability to create and run 25 VMs with one click. That will be bottlenecked if only one disk is serving it, whereas the RAID 5 or RAID 6 approach gives me, with 7 disks, the reads of a medium-priced SSD. I know RAID 0 could give me the writes too, but if another Samsung fails, everything is gone again, so I'm sticking with the "better safe than sorry" approach here :P because that happened for the first time yesterday and I had to reinstall everything.

 

You may also ask yourself: why not PVE and unRAID inside ESXi, or PVE and ESXi inside unRAID, or, as I now plan, unRAID and ESXi inside PVE? I'm really trying things out right now, and to be honest I don't even know why I initially only considered ESXi or unRAID as the host and not PVE, because I've used PVE for about 7 years now, I think. If you asked which of these hypervisors I know best, it's definitely PVE, on a much bigger scale. But I also know the hiccups of all of them. PVE has basically none, except setting up nested VT yourself, which is really, really easy: about 3 commands and a reboot. ESXi (I'm new to it and just trying it out, because I'll be using it at work in the future) needs a good HW RAID controller and compatible hardware, or you'll never see a newer version of the software, and the cool stuff like IaaS or XaaS costs a lot of money. unRAID is really cool as a NAS, with a custom SW RAID approach that was never based on a HW RAID controller, and it gives you a lot of features, but it's not made for clustering, HA, or live migration of VMs/containers, which the others do; again, it's a NAS. To be honest, I don't use HA, clustering, or live migration, but I do take nightly snapshot backups of containers, which PVE masters perfectly, and everything I plan to move here later is already in that format, so no migration is needed; though that wouldn't be a dealbreaker if it were.

 

I really hope you guys understand a little better now why I'd love to have PVE but also unRAID. (ESXi: meh, I'm new to it, it costs a lot of money, and it's really freaking picky, but it's where the cool enterprise stuff is, IaaS/XaaS VM deployments and such; don't consider me knowledgeable there, I only know the basics, maybe.) But to be honest, from my point of view, unRAID has always been a freaking good NAS with really cool features that has no rival, while PVE is a free, basic ESXi alternative that offers no NAS functions but many container formats besides docker. All I know is I want a basic, robust system, virtualize the rest, and evaluate a thousand more things in the future. That's why it's so hard to make up your mind about the order in which to do things.

Is speed more important than power efficiency?

Where is the bottleneck in each approach?

Can you live with that bottleneck?

Does nested hypervisor B or C even work under host system A?

Does the deeper-nested system X even work under host or guest systems A and B? (Again: robust base.)

Where do I store my classic NAS "private" stuff?

What does a backup strategy look like? (RAID, even unRAID, isn't a backup, as you know.)

I'm sure I'd come up with many more questions the more I type here :D

 

So please feel free to share your feedback, knowledge, experience, and maybe even your plan for doing this, because there is more here than meets the eye, and more eyes see more than just one pretty confused and sad (RMA'ing my new server) person.


Oh my god, I couldn't accept RMA'ing all the parts, so I stuck with it and discovered that the mainboard is really picky about the PSU; the third one works. I'm installing the new rig now and I feel like I'm in heaven ;)

