Unraid virtualized in Proxmox: resizing an XFS disk



Hi everyone,

 

I'm new here. :)

 

I am currently testing Unraid on a trial license. I decided on Unraid (instead of the other NAS software solutions out there) to host my SMB shares and to make use of the really nice App store with some Docker containers.

I kind of like Unraid's WebUI as well. :)

 

So basically I installed Unraid on a USB stick and boot the Unraid VM from that stick in Proxmox VE (USB device passthrough).
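
For anyone wanting to reproduce this: the passthrough can be done from the Proxmox shell with something like the following. The VM ID 100 and the vendor:device ID are placeholders for my actual values; look yours up with lsusb.

    # list USB devices to find the vendor:device ID of the Unraid stick
    lsusb

    # pass the stick through to VM 100 (IDs here are placeholders)
    qm set 100 -usb0 host=0781:5583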

My Proxmox host has a relatively big ZFS storage (at least for my needs :) ). Running another server is not really an option; I want to achieve this on the same server, the Proxmox VE server.

Besides increasing throughput via the ZFS array, it gives me a good way to back up the Unraid VM itself, with all its data, to a different server/storage. I would probably do another backup from within Unraid itself, just to have the data backed up twice (but probably only the more important stuff, not all files).
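
That VM-level backup would then just be a normal Proxmox backup job, roughly like this (VM ID and storage name are placeholders):

    # snapshot-mode backup of the whole Unraid VM to a different storage
    vzdump 100 --storage backup-nas --mode snapshot --compress zstd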

 

Anyway, my thought was, since I use XFS in Unraid, to just give Unraid one disk. There is no need for a parity disk. I added this disk as an array disk. Then I tried to resize the disk (increase its size), but Unraid doesn't seem to like this and throws an error:

Unmountable: Unsupported partition layout.

The disk itself can be mounted in Unraid through the command line (the files are there), but I cannot start the array services through the WebUI.
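
By "mounted through the command line" I mean roughly this (sdX is a placeholder for the actual device):

    # inspect the partition layout Unraid complains about
    fdisk -l /dev/sdX

    # mount the XFS partition manually; the files are all still there
    mkdir -p /mnt/test
    mount -t xfs /dev/sdX1 /mnt/test
    ls /mnt/test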

 

So how do the rest of you do this? I'm mainly asking those who have virtualized Unraid in Proxmox, ESXi, ...

 

I really would not want to keep adding bigger disks over time and copying the data over (since at some point in the future this will no longer be possible once the ZFS pool fills up).

The SMB share will hold quite a lot of files and grow to several TB in size.

 

So anyone?

XFS itself can be resized (at least grown). How can this be done in Unraid without having to format the drive again because the partition layout changed?
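
Outside of Unraid, growing XFS is straightforward: enlarge the partition, then grow the mounted filesystem. A rough sketch (sdX and the mount point are placeholders; xfs_growfs only works on a mounted filesystem):

    # grow partition 1 to use the whole enlarged vdisk
    parted /dev/sdX resizepart 1 100%

    # grow the mounted XFS filesystem to fill the partition
    mount -t xfs /dev/sdX1 /mnt/test
    xfs_growfs /mnt/test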

Thanks a lot for ideas and input on this matter.

 

Regards

 

PS: There is no production data on that disk, just test files.

PPS: I would like to avoid btrfs and ZFS in Unraid if possible and rather stick to XFS.

16 hours ago, Fubbel said:

How can this be done in Unraid without having to format the drive again because the partition layout changed?

I do not think this is possible. Unraid recognises disks by the combination of serial number and size. If either value changes, it is considered to be a different drive.


Hi itimpi,

thanks for your answer, although that's not what I hoped for. :(

Further testing on this shows that the disk always needs to be formatted as soon as it gets extended.

I found an older thread (2017) here in the forum where someone wrote (for some Areca RAID controller) that Unraid expanded the file system after using New Config.

But that doesn't seem to be the case anymore.

 

Hey, I like your case and the Icy Dock. ;) Got the same here; it has served me well for many years.


Hi again,

 

Maybe I am missing something here, but in a way I got it solved.

May I have other opinions on that, please?

 

Not with XFS, but with ZFS. Maybe there is a way with XFS as well? Or some other filesystem, I'm not sure.

 

Here is what I did.

 

VM with two vdisks: one small one for the array, one of the desired size for the data. For testing I used 32GB and 100GB.

1. Install the ZFS Master plugin.

2. Add the small vdisk to the VM and add it to the array (XFS). (The array drive seems necessary, since otherwise the pool cannot be started.)

3. Add a pool under Pool Devices with one slot, assign the bigger vdisk to it and format it with ZFS. Add datasets to it (datasets don't seem to be necessary).

4. Test the ZFS pool over SMB.

5. Increase the vdisk size in Proxmox. (I added 50GB, and later, as a second test, another 50GB, repeating all the steps and going up to 200GB.)

6. From there I basically grew the ZFS side by first increasing the disk size and then expanding the pool itself. These steps required the Linux console (see the sketch below this list).

The result was a ZFS pool of first 150GB, then 200GB, with the data still on the drive.
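
To be concrete, the console part of step 6 looked roughly like this (VM ID, device name and pool name are placeholders for my actual values):

    # on the Proxmox host: grow the data vdisk by 50GB
    qm resize 100 scsi1 +50G

    # inside the Unraid VM: make the kernel notice the new disk size
    echo 1 > /sys/class/block/sdX/device/rescan

    # expand the pool onto the new space (-e grows the vdev to its full size;
    # depending on how the disk is partitioned, the partition may need
    # growing first, e.g. with parted)
    zpool online -e mypool /dev/sdX1
    zpool list mypool

Setting autoexpand=on on the pool (zpool set autoexpand=on mypool) should, if I understand it correctly, make the expansion happen automatically once the device grows.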

I know of no reason why this should not work on ESXi or other similar platforms.

 

If I got this right, the array goes through Unraid's own logic (there is some Unraid driver loaded, which I think is doing the magic), and all new or changed drives need to be formatted. (If I am not mistaken, the Unraid driver checks a DB or file and knows the drives? Maybe some information is saved on the drive itself? Basically it prevents the user from having Unraid mount such a drive without a format.)

Pool devices don't go through the Unraid logic and simply get "mounted" (so disk changes are not detected and you are not forced to format the drive). Which, if I am correct, means that no parity drive can be used for the pool? Fine for my setup. Maybe other Unraid features are missing too?

 

What do other people think about this method? I saw some other threads here (unanswered) by people asking for the same thing on a virtual host.

Are pool devices intended for storing data, or only for cache?

Am I correct that without an array disk there is no way to start the array, and therefore no way to start the pool device either?

Am I missing something here?

 

Regards

 

 

