NVME SSD in unRaid Server as VM Disk (Newbie) :)



First of all a warm hello to everyone

 

Thanks for this very good software. So far I have been running Proxmox, but I would like to switch to unRaid.

 

I'm still new to unRaid, so please bear with me.

I'm currently running my old server for testing.


I don't want to go into too much detail about that machine, though.

My current Proxmox server, which is the one I actually want to install unRaid on, runs a Ryzen 2700X with 32 GB of RAM and a 1 TB Samsung 970 EVO NVMe SSD.

Hard drives are expected to come in the next few days.

But now to my question: I would like to use a conventional 1 TB SATA SSD as the cache drive and keep my VMs and Docker containers on the NVMe.

However, I don't know exactly how to set that up. In my unRaid test machine I currently have a 256 GB SSD and a 1 TB HDD, and I tried to use the Unassigned Devices (UD) plugin to make the SSD available without adding it to the array, but I couldn't get it to work.

 

Or is my thinking wrong here?

 

Can I start up Unraid without any HDDs and only run my VMs as described?

 

Hoping for your help.

Regards Maggi


No. You need at least one device in the array. It could even be a USB stick (which is NOT your Unraid boot stick), but there needs to be something.

 

The only reason you would need an additional SSD for cache (regardless of whether it's SATA or another NVMe), instead of just using the 970 Evo for cache, is if you pass the 970 Evo through to a VM as a PCIe device (i.e. the VM gets exclusive use of the 970 Evo).

 

PCIe pass-through maximises performance and allows trim (as opposed to the ata-id pass-through method, which you should not use with NVMe).

It's not that putting vdisks on the 970 Evo would be terrible. Performance would still be good and should still be better than a SATA SSD; it just wouldn't be the absolute maximum.
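
If you do go the PCIe pass-through route, it's worth checking first that the NVMe controller sits in its own IOMMU group. Below is a rough Python sketch of mine (not an Unraid tool; it assumes IOMMU/VT-d is enabled so /sys/kernel/iommu_groups is populated) that lists NVMe controllers and the group each one belongs to:

    #!/usr/bin/env python3
    # Rough helper sketch: list NVMe controllers and their IOMMU groups.
    # Assumes IOMMU is enabled, otherwise /sys/kernel/iommu_groups is empty.
    import glob, os

    for dev_path in glob.glob("/sys/kernel/iommu_groups/*/devices/*"):
        pci_addr = os.path.basename(dev_path)        # e.g. 0000:01:00.0
        group = dev_path.split("/")[4]               # IOMMU group number
        try:
            with open(os.path.join(dev_path, "class")) as f:
                dev_class = f.read().strip()
        except OSError:
            continue
        if dev_class.startswith("0x0108"):           # 0x0108xx = NVM / NVMe controller
            print(f"NVMe controller {pci_addr} is in IOMMU group {group}")

The System Devices page in the webGUI shows the same grouping; this is just a scripted way to get at it.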

 

So instead of buying an additional SSD for cache, I would suggest you just stick to a SINGLE 970 Evo in the cache pool.

(Note: there have been reports of poor performance with multi-drive btrfs cache pools, which seems to be correlated with Samsung's unusual TLC block size. Even though I haven't had that issue myself, it is probably prudent to avoid a multi-drive cache pool with Samsung SSDs for now.)


Ah thanks for the detailed answer,

 

So if I only use the SSD for VMs, does that work for just one VM, or can I share it between several VMs?

 

The idea behind putting the vdisks on this SSD was to get the speed advantage.

 

Regards Maggi

1 hour ago, Maggi0r said:

Ah thanks for the detailed answer,

 

So if I only use the SSD for VMs, does that work for just one VM, or can I share it between several VMs?

 

The idea behind putting the vdisks on this SSD was to get the speed advantage.

 

Regards Maggi

Pass through = exclusive use.

Vdisk = NOT pass-through = can be shared, i.e. you can put multiple vdisk files on the same NVMe.

 

My point was that if you use vdisks, there's no need to buy an additional SATA SSD to use as cache. Just put the NVMe in the cache pool and put the vdisks on the cache (which is the NVMe) to simplify things.
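
To illustrate the point about multiple vdisks sharing one NVMe: vdisk image files are created sparse, so each VM only consumes the space it has actually written. Here's a small Python sketch (assuming the default domains share at /mnt/user/domains - adjust the path if yours lives elsewhere) that compares each vdisk's apparent size with what it really occupies on the cache:

    #!/usr/bin/env python3
    # Sketch: compare apparent vs. actually allocated size of each vdisk image.
    # The /mnt/user/domains layout is the Unraid default; adjust if needed.
    import glob, os

    vdisks = glob.glob("/mnt/user/domains/*/*.img") + glob.glob("/mnt/user/domains/*/*.qcow2")
    for vdisk in vdisks:
        st = os.stat(vdisk)
        apparent_gb = st.st_size / 1e9              # size the VM sees
        allocated_gb = st.st_blocks * 512 / 1e9     # space actually used on the drive
        print(f"{vdisk}: {apparent_gb:.1f} GB apparent, {allocated_gb:.1f} GB allocated")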

27 minutes ago, Maggi0r said:

Oh, I don't remember exactly where anymore; it was on various forums and blogs.

 

If I find it again, I'll post it here.

 

Regards Maggi

A lot of the paranoia over SSD wear (and the resulting recommendations) was justifiable before TRIM was a thing, and especially before the advance of vertical NAND aka 3D TLC.

Nowadays, SSDs are far more resilient and capable of surviving well beyond their rated endurance.

Even when they do fail, as the cells die and the reserve cells are used up, SSDs tend to fail gracefully, leaving users plenty of time to find a replacement.

(The exception is Intel, whose drives lock themselves into a read-only state once all reserve cells are used up - but even that would take a very long time.)

 

All of the excessive-wear cases I have seen on here were either (a) a system issue or (b) user error:

  • An on-going example of (a) is the bug report about btrfs constantly writing to the cache pool at about 5MB/s despite (supposedly) no activity. Sustained, 5MB/s works out to over 400GB per day (roughly 13TB a month), which is excessive wear in the sense that it is on top of normal usage - see the back-of-envelope calculation after this list. I have several SSDs that average 250-500GB written per WEEK on normal usage and they are still refusing to die.
  • User error would be things like not running trim often enough (or not running trim at all, e.g. in the case of ata-id pass-through), mixing write-heavy and static data on the same SSD, etc.
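
For a sense of scale, here's the back-of-envelope maths on that 5MB/s figure, done in Python. The 600 TBW rated endurance is what I recall for the 1 TB 970 Evo, so treat it as approximate and check your own drive's datasheet:

    #!/usr/bin/env python3
    # Back-of-envelope endurance maths for the btrfs idle-write bug mentioned above.
    mb_per_s = 5                                  # constant write rate reported in the bug
    gb_per_day = mb_per_s * 60 * 60 * 24 / 1000   # ~432 GB/day
    tb_per_month = gb_per_day * 30 / 1000         # ~13 TB/month
    rated_tbw = 600                               # approx. rated endurance of a 1 TB 970 Evo
    print(f"{gb_per_day:.0f} GB/day, {tb_per_month:.1f} TB/month")
    print(f"Rated endurance eaten in roughly {rated_tbw / tb_per_month:.0f} months at that rate")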

 

In your particular case, you can set up the 970 Evo as a single-drive cache pool, mount the 256GB SSD as an unassigned device for write-heavy data (e.g. download temp), run trim frequently, and Bob's your uncle.
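
On the "run trim frequently" part: the Dynamix SSD TRIM plugin takes care of the cache on a schedule. If you also want to trim the unassigned SSD, a small scheduled script along these lines would do it - a minimal Python sketch, with example mount points that you'd need to replace with your own:

    #!/usr/bin/env python3
    # Minimal scheduled-trim sketch. Mount points below are examples only:
    # /mnt/cache is the cache pool, /mnt/disks/ssd256 stands in for the unassigned SSD.
    import subprocess

    for mountpoint in ("/mnt/cache", "/mnt/disks/ssd256"):
        try:
            result = subprocess.run(["fstrim", "-v", mountpoint],
                                    capture_output=True, text=True, check=True)
            print(result.stdout.strip())
        except subprocess.CalledProcessError as err:
            print(f"trim failed for {mountpoint}: {err.stderr.strip()}")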

 

(Tip: set the default file system to xfs. If Unraid still forces you to format the cache as btrfs, then you have NOT set up the cache pool correctly as single-drive - Unraid forces btrfs for a multi-drive cache pool even if only a single drive is assigned.)
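
A quick way to double-check what you actually ended up with after formatting - this tiny sketch just reads /proc/mounts and reports the cache filesystem:

    #!/usr/bin/env python3
    # Sanity check: report which device and filesystem are mounted at /mnt/cache.
    with open("/proc/mounts") as f:
        for line in f:
            device, mountpoint, fstype, *_ = line.split()
            if mountpoint == "/mnt/cache":
                print(f"cache is {device} formatted as {fstype}")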

 


Thank you very much for the detailed instructions.


As I said, I'm still new to unRaid.

 

I think I will then use the 970 as the cache SSD, and TRIM is covered since the TRIM plugin is available.

 

And do I then have to add the other SSD via the UD plugin?

And then, in effect, the VMs get written to the cache, i.e. onto the EVO, and I copy them elsewhere as a backup.

 

Regards Maggi
