Need help getting away from ZFS and expanding my Storage



Hi guys, as the title says I need help moving away from my current drive setup and setting it up again without ZFS.

Current Situation:
At the moment I have 5 drives installed

2 x 256GB NVMe M.2 SSDs on the motherboard

 

3 x 18 TB EXOS X18 Drives that are running in a raidz1 setup



I'm now getting an additional 5 drives:
3 more Exos 18 TB drives and 2 x 500GB SATA SSDs.

My goal would be:

An array with 6 x 18 TB Exos drives (1 drive as parity) for all the data which is currently on the ZFS raid.

The 2 new 500 GB SATA SSDs should work as a mirrored cache, so that I still have high speed when copying files to the server.

And the current array of 2 x 256GB NVMe M.2 SSDs should still carry all the data of the apps that are currently installed, like it does now: e.g. Docker containers, 2 VMs, Plex, JDownloader and so on.

Is it even possible to set it up like that? I would have 2 arrays then, right?

Could anyone help me with a step-by-step instruction on what I have to do so I don't lose any data?

I thought about just adding the 3 new HDDs, setting them up as the new array, and then copying all the files from the raidz1 to these 3 drives. After that I would dissolve the raidz1 and just add those disks to the array.

This is my first Unraid installation and I'm not very experienced with Linux, so I'm not really sure if this will work at all.

I'd be glad for any help or advice.

 


I'm not familiar with ZFS, but the only issue I see with your process is making sure none of your containers or configs reference /mnt/disk1. Check that before you proceed.
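A quick way to check is to search the Docker templates and share configs on the flash drive for that path. A sketch; the locations below are the usual Unraid ones, adjust if your setup differs:

    # Look for any hard-coded /mnt/disk1 references in container templates
    grep -rl '/mnt/disk1' /boot/config/plugins/dockerMan/templates-user/ 2>/dev/null
    # ...and in the share configuration files
    grep -rl '/mnt/disk1' /boot/config/shares/ 2>/dev/null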

 

If you set up with all stock /mnt/user references, then you are good to...

 

1. Assign one of your new 18TB drives as parity. Let it build, then do a correcting check. Zero errors is the only acceptable result. Assuming no errors...

2. Assign a second new 18TB drive as Disk 1. Let it rebuild, then do a non-correcting check. Zero errors, etc...

3. Create a new pool, call it cache. Assign the two original array 256GB SSDs to it and format it. Verify the resulting BTRFS RAID1 is healthy.

4. Disable the Docker and VM services (not the containers themselves). Make sure there are no DOCKER or VMS items listed in the menu; the words should be gone.


5. Run the mover. If everything was left stock, all the appropriate shares should transfer themselves to the new pool named cache. You can check by going to the Shares tab and clicking Compute All; it will tell you what lives where.

6. Assuming the system, domains, and appdata shares are all on the "cache" pool now, re-enable the Docker and VM services. At this point all your stuff should be working exactly as it was when we started, except that you now have a new pool named cache, and Disk 1 is 18TB with a bunch of free space.

7. Add another new 18TB as Disk 2. Let it clear, then format it.

8. Create shares for the data you want to migrate from the ZFS pool.

9. Copy as much as you can fit into the 36TB of free space on the main array.
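For the copy itself, rsync from the terminal is a safe way to do it. A sketch; /mnt/zfs/data and the share name are placeholders for your actual ZFS mountpoint and target share:

    # Dry run first: -n only lists what would be copied
    rsync -avn /mnt/zfs/data/ /mnt/user/data/
    # Then the real copy, with per-file progress
    rsync -av --progress /mnt/zfs/data/ /mnt/user/data/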

 

At this point you have a choice: purchase another 18TB drive so you can keep both the Unraid main array parity-protected and the ZFS pool intact while you copy the rest of the data, or degrade one of them by dropping either the parity drive or one of the ZFS array drives. I am not familiar enough with ZFS to tell you which is the safer choice if you degrade one; my advice is to have enough drives to keep both redundant. Once you have decided how to proceed (purchasing another drive, or degrading one of the arrays by dropping a drive), add that drive as Disk 3 and complete the data copy.

 

10. Create another pool with 2 members; call it whatever you want ("transfer", "scratch", "cache2", pretty much anything besides cache). Assign it to whichever shares you want it to work with.

11. After everything is working as desired, do whatever is necessary to dissolve the ZFS array, and keep those disks unused until needed, either to replace failed drives or as additional space when the main array drops below 18TB free. I don't recommend adding more drives than you are actively using; it's a waste of drive hours and electricity.
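As I said, I don't use ZFS myself, so verify against the ZFS docs, but from what I understand dissolving the pool and freeing the disks looks something like this ("tank" is a placeholder pool name, sdX a placeholder device):

    # Cleanly detach the pool so it could still be re-imported elsewhere
    zpool export tank
    # Or permanently destroy it once you are sure all data has been copied off
    zpool destroy tank
    # Optionally wipe the ZFS label from each former member disk
    zpool labelclear -f /dev/sdX1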

 

What I have outlined is by far not the quickest method, but I tried to keep everything as safe and simple as possible, with verifiable progress and steps that can be undone or redone if things aren't working as planned.


Unfortunately I kinda screwed up the process and lost all my Docker container data and VM data. It's not a big deal, I can set everything up from scratch.
But to avoid any mistakes I have another question:

I would like to put all Dockers and VMs on the NVME_Cache pool.

At the moment I have the following array: [screenshot]

And 2 pools: [screenshots]

How do I set things up so that all the Docker containers and VMs store their data on the NVME_Cache pool?

For example, when setting up Krusader: [screenshot]

What would be the correct paths?

And what do I have to do to get rid of all the old folders from my previous installation, which mostly contain no data? [screenshot]

I have empty appdata, domains and isos folders under disk 1, and also under user and user0, which is kinda weird. But for some reason the system folder still contains some files from the previous installation, I guess: the docker and libvirt .img files...

Any hint/clue as to which of these folders can be deleted before I start installing all apps and VMs again?
 


I also have an additional question.

I'm currently copying all my files from my ZFS raidz1 to the normal Unraid array, and I'm getting write speeds of around 60-120 MB/s. It's pretty random between those numbers, even when copying very large files. One thing that I don't really understand:

I can read from the ZFS raid at up to 500 MB/s, and the Exos X18 18TB can write at up to 250 MB/s.

But on the array there are constant reads and writes on Disk 1 and on the parity drive,
which is probably the reason why the speed is that low, right?

Is it really copying from the raid to Disk 1 in the array, and then at the same time reading from Disk 1 and writing to the parity?
That feels like an extra step which slows the process down severely.

Wouldn't it be enough to copy from the ZFS pool and write to Disk 1 and parity at the same time? Why is there an extra read from Disk 1 while data is being copied onto it?

 

20 hours ago, matuopm said:

I have empty appdata, domains and isos folders under disk 1, and also under user and user0, which is kinda weird.

It is expected.    These are different views of the same folders.   It is probably worth reading the User Shares part of the online documentation, accessible via the ‘Manual’ link at the bottom of the GUI, to understand why.


Thank you for the Turbowrite hint. This really solved the speed issue. I'm getting constant max speeds on large files now, which is great.
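In case it helps anyone else reading this later: the setting is Tunable (md_write_method) under Settings > Disk Settings, set to "reconstruct write". Apparently it can also be toggled from the terminal like this (my assumption from reading around, I just used the GUI myself):

    # 1 = reconstruct ("turbo") write, 0 = read/modify/write
    /usr/local/sbin/mdcmd set md_write_method 1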

I still have to look into the user shares thing. I don't really get the way Linux/Unraid handles things there. From my point of view, I would just delete all the appdata, domains, isos and system folders, create those folders on the NVMe cache pool, and then point all the paths at the NVMe cache pool. [screenshot]

I'm just not sure if that's correct 🤷‍♀️

4 hours ago, matuopm said:

Ok, I just read some of the user shares documentation. Do I get it right that it's better to edit the user shares (appdata and so on) to be on the NVME_Cache pool, instead of changing the paths of the Docker containers?
 

Yes.  But it depends upon how you've referenced the share in the apps.

 

If you've referenced /mnt/cache/appdata/blah then you've got to change them all to reference the new path.  If you've referenced /mnt/user/appdata, as the system is designed to do, then moving the appdata share to the new pool and changing the share settings is all you need to do.
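If you want to check quickly, the user templates live on the flash drive, so something like this should list any that still hard-code the old path (assuming the usual template location):

    grep -rl '/mnt/cache' /boot/config/plugins/dockerMan/templates-user/ 2>/dev/null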


I got the Krusader docker up and running, and it seems to be running exactly the way I wanted it to: all data stored on the NVMe cache.

But there's some stuff I still don't really understand about Linux. The things that are stored in /mnt are not shares, and that confuses me...
I don't really understand the concept of what is in /mnt and why there seems to be identical stuff in several places. Has anyone got a link to good documentation on /mnt and Unraid?

 

 

8 hours ago, matuopm said:

I got the Krusader docker up and running, and it seems to be running exactly the way I wanted it to: all data stored on the NVMe cache.

But there's some stuff I still don't really understand about Linux. The things that are stored in /mnt are not shares, and that confuses me...
I don't really understand the concept of what is in /mnt and why there seems to be identical stuff in several places. Has anyone got a link to good documentation on /mnt and Unraid?

 

 

You were earlier given the link to the User Shares documentation, and that explains how User Shares are seen at the Linux level and how this is a logical folder/file view that is independent of the physical drives.

 

If you scroll down to the Disk Shares section, it describes how array drives and pools are seen at the Linux level.   If you look at that level, you see the files/folders that are specific to the selected physical drive/pool.

 

The fact that there are both logical and physical views of the same folders/files explains why there can be what at first glance might appear to be duplication.
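A quick way to see both views side by side from a terminal session (a sketch; your disk and pool names will differ):

    ls /mnt                   # lists disk1, disk2, ..., each pool, plus user and user0
    ls /mnt/disk1/appdata     # physical view: only what is stored on disk1
    ls /mnt/user/appdata      # logical view: merged across all array disks and pools
    ls /mnt/user0/appdata     # logical view of the array only, excluding pools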

 

If there is still confusion after reading that section of the documentation, perhaps you can point out exactly which bits you find confusing, so that they can be examined with a view to clarifying/expanding the wording.


Ok, at the moment it seems all my problems have been solved. There is only one thing left which I don't really understand.
I have appdata, domains, isos and system on the NVMe cache pool. It's a 2-disk pool, so it's RAID 1, I think?
But the names of the shares are all orange, which means they are not protected. In this case, as RAID 1, it is protected against one disk failure, right?

Has anyone got a clue how I can fix that?

11 hours ago, matuopm said:

Here you are 🙂

The colors on the Dashboard have a different meaning, which can be confusing I guess; there it means those shares are set to cache-only. If you click on "Shares" you'll see a green ball or an orange triangle before each share name, indicating whether the share is on redundant storage or not. Note that for pools it only checks that the pool is multi-device, not whether it's actually redundant, so if for example the pool is raid0 it will show as protected when in fact it's not.


Ok, there is still one question left.

I have this 6-disk array and a 2-SSD cache pool, and I'm not sure how mover really works or how to set it up correctly with the cache pool.

My thought was:
When I copy something from my PC to a share that uses the cache pool, it is written to the cache first, and the mover moves it to the array at a certain time, so that the cache is empty again; a feature so that the HDDs don't spin up every time I copy something, unless the cache pool is full.

But the cache pool is full now, and when I start mover it does not move anything away from the cache pool to the array.
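I even tried kicking it off from a terminal and watching the log (assuming I have the script path right, and with mover logging enabled in Settings > Scheduler):

    /usr/local/sbin/mover &                    # start a manual mover run
    tail -f /var/log/syslog | grep -i mover    # watch what it is (or isn't) moving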

Do I have a false understanding of what mover and a cache pool really do, or is it supposed to work that way?

Screenshots as an example: [screenshots]

Actually, it is writing something at the moment, but I'm not copying anything 🤷‍♀️

Is it possible to use the cache pool as a "write cache" only when I copy something to the array? And every 8 hours or so it would flush the write cache and write the stuff to the array?

