Unraid vs Proxmox = <3?


Hi All,


I have this idea to use Unraid as a NAS / storage backend for my Proxmox cluster (via Samba; I know iSCSI is faster, but Unraid is just that easy to use :)).


I have a VM with large disks in it. It currently uses my Synology NAS as the backend, but that's becoming too slow.


I have a Proxmox cluster of 3x HP 8300 Elite SFF machines, on which a couple of Windows and Linux VMs are running.

The backend consists of an HP MicroServer Gen8 with 16 GB RAM, 4x 2 TB WD Reds, and a Samsung 860 EVO 250 GB cache disk.


The problems I ran into:

1. My cache disk for the VM share is almost constantly full. (Yes, I could simply buy a bigger cache disk, but I assume Unraid caches the entire disk image, while only a small portion of it should actually need caching, right?)


Is there any way to make this smarter/more efficient? For example, by using qcow2 or raw disks instead of the VMDK disks I'm using right now (which I chose for compatibility in case something goes wrong).
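For what it's worth, a thin-provisioned qcow2 image only takes up space as the guest actually writes data, which might keep the footprint on the cache smaller. A rough sketch of what I mean (the mount point and VM ID are just made-up examples):

```shell
# Create a thin-provisioned qcow2 image on the Unraid share as mounted
# on the Proxmox host (/mnt/pve/unraid is a hypothetical path).
# The file starts near-empty and only grows as the guest writes to it.
qemu-img create -f qcow2 /mnt/pve/unraid/images/100/vm-100-disk-0.qcow2 500G

# Compare the virtual size with the space actually used on disk:
qemu-img info /mnt/pve/unraid/images/100/vm-100-disk-0.qcow2
```

I don't know whether Unraid's cache/mover handles sparse files any differently, though, so this is just a guess at where the savings could come from.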


2. If I want to create a disk with, say, 2.2 TB of space, Unraid won't let me place it on the share. The system is designed to "overflow" files to other disks whenever a disk fills up, but a single file larger than the biggest disk I have simply won't fit anywhere.


Does anyone have a workaround for this? Proxmox with split hard-drive files (so the VM itself still sees one drive), or another fix?
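By "split" I mean something like giving the VM several smaller virtual disks and joining them inside the guest, e.g. with LVM (the device names here are hypothetical):

```shell
# Inside a Linux guest: combine two ~1.1 TB virtual disks
# (/dev/sdb and /dev/sdc, assumed names) into one ~2.2 TB volume.
pvcreate /dev/sdb /dev/sdc          # mark both disks as LVM physical volumes
vgcreate bigvg /dev/sdb /dev/sdc    # pool them into one volume group
lvcreate -l 100%FREE -n bigdata bigvg  # one logical volume spanning both
mkfs.ext4 /dev/bigvg/bigdata        # format and use as a single filesystem
```

That way each image file on the array stays under the 2 TB per-disk limit, but the guest still sees one filesystem. No idea whether that plays nicely with Unraid's split levels, though.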


Thanks in advance for reading this and trying to help me!

