Changing disks



So I started my Unraid journey a few days back, and running it for a few days on brand new hardware has already shown me the benefits.

Now I've decided to take the plunge into phase 2.

I was running Unraid on a single NVMe drive (phase 1) and it ran well. I was able to get everything that I needed working and any issues sorted with the help of great members of this forum, Community Apps, and a few videos on YouTube. Now, with phase 1 out of the way, I've put in 4x 4TB WD Reds that came out of my FreeNAS server. They aren't brand new and one of them has an unreadable sector. I'm running preclear on them and I guess it will take about 3-4 days, which I understand and am prepared for.

Now I want to take the NVMe out of the array and repurpose it. Maybe use it as a cache (for lightning-fast speeds) to run all the VMs, Dockers, and plugins, while using the 4TB drives as RAID 5 or RAID 6 data drives.

If I understand correctly (naming the disks A/B/C/D), I should add drive A as the parity disk and B as data disk 2 and let it build, then replace the NVMe with drive C and let it rebuild? Then add the NVMe as a cache.

A few more facts: I'm limited to a 1 Gbps connection throughout the house and I'll be a single user. My main reasons for this machine are Plex, gaming on a Windows 8.1/10 VM (once a 3080 is available), and exploring Linux distros. Currently on an ASUS Dark Hero X570 with a 5900X, 64GB 3600MHz Corsair Vengeance Pro, and a P2200 (for Plex transcoding).

 

Link to comment

If the NVMe was disk1, then adding parity and letting it build will allow you to replace disk1 with a spinner and let it rebuild. The NVMe will then be free, so you can use it as cache and add other spinners to the array as needed.

 

1 hour ago, bhootz said:

use the 4TB drives as RAID 5 or RAID 6 data drives.

The parity array does not do RAID 5 or 6; see here for an idea of how Unraid differs from traditional RAID:

 

https://wiki.unraid.net/Overview
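
If it helps to picture it: with Unraid, each data disk is its own standalone filesystem, and the single parity disk simply holds the bitwise XOR of all the data disks, so any one failed disk can be rebuilt from the rest. A rough illustration of that idea (just a sketch, not Unraid's actual code):

```python
# Illustrative sketch only: single parity is the bitwise XOR of every data
# disk, so any ONE missing disk can be rebuilt from parity plus the survivors.
# Unlike RAID 5/6, the data disks are independent filesystems, not striped.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Hypothetical tiny "disks" for demonstration
disk_a = b"\x01\x02\x03\x04"
disk_b = b"\x10\x20\x30\x40"
disk_c = b"\x0f\x0e\x0d\x0c"

parity = xor_blocks([disk_a, disk_b, disk_c])

# If disk_b dies, rebuild it from parity and the remaining disks
rebuilt_b = xor_blocks([parity, disk_a, disk_c])
assert rebuilt_b == disk_b
```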

 

 

Link to comment

So I guess all went well. Thanks @trurl.

Image attached.

I managed to build parity; it took almost 8 hours. Now I've replaced the NVMe with a spinner and it looks like things will rebuild over the next few hours.
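
A quick back-of-the-envelope check suggests 8 hours is about what a 4TB spinner should take (assuming a roughly constant average rate):

```python
# Rough sanity check: average rate needed to cover a 4 TB parity disk in ~8 hours
capacity_bytes = 4e12                      # 4 TB (decimal, as drives are marketed)
duration_s = 8 * 3600                      # ~8 hours
print(capacity_bytes / duration_s / 1e6)   # ~139 MB/s, typical for a 4 TB spinner
```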

I do have a few doubts which I need help with. I found a few 2019 posts on this topic but I'm not sure if they are still relevant now.

How exactly do I 'make' the Dockers/plugins/VMs go to the NVMe?

Do I move the appdata for all of them to the NVMe and have Mover shift them back to the data disks?

That would involve me mounting the NVMe as a cache and moving the appdata to it. Would I have to repeat the process every day, since I think Mover does clear out the cache? Would that also mean that for a few minutes/hours the Dockers/plugins would not be available, since the appdata path would be pointing at an empty location?

I also saw a video from SpaceInvader One where he passes the NVMe through to a VM to gain speed. He made the NVMe the boot drive and the vdisk a data disk. I liked that idea too. Could I use one (2TB) NVMe to handle all of the VMs/Dockers/plugins? I know that on reads and writes alone, the 1 Gbps connection in my house with an Orbi AC3000 (887 Mbps backhaul channel) will be saturated before it can make the spinners sweat.

Maybe I'm asking a lot, but then I guess I'm expecting a super-fast VM/Plex without breaking my bank balance.

Screenshot_20210310-024013__01.jpg

Link to comment

Usually you want appdata, domains, and system shares to stay on cache for performance, and so array disks can spin down, since these shares always have open files. These shares are created as cache-prefer when you enable Docker and VM Manager in Settings. If you get those shares created before having a cache disk, then they will be on the array, but Mover will move cache-prefer shares to cache... except Mover can't move open files, so you have to disable Docker and VM Manager in Settings and then run Mover.
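
If you'd rather double-check from the command line, here's a rough sketch of the same idea (assuming the usual share paths and the mover script at /usr/local/sbin/mover on a stock install); the Docker and VM Manager toggles still happen in Settings:

```python
# Minimal sketch, assuming standard Unraid paths. Disable Docker and the VM
# Manager in Settings first, otherwise these shares will have open files and
# Mover will skip them.
import subprocess

SHARES = ["/mnt/user/appdata", "/mnt/user/domains", "/mnt/user/system"]

def open_files(path):
    """Return lsof output for files still open under a share (empty if none)."""
    result = subprocess.run(["lsof", "+D", path], capture_output=True, text=True)
    return result.stdout.strip()

for share in SHARES:
    busy = open_files(share)
    if busy:
        print(f"{share} still has open files -- Mover will not move them:\n{busy}")

# Kick off Mover once everything is closed (script path on a typical install)
subprocess.run(["/usr/local/sbin/mover"])
```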

 

If you want more specific advice post your Diagnostics ZIP (Tools - Diagnostics).

Link to comment
20 hours ago, trurl said:

Usually you want appdata, domains, and system shares to stay on cache for performance, and so array disks can spin down, since these shares always have open files. These shares are created as cache-prefer when you enable Docker and VM Manager in Settings. If you get those shares created before having a cache disk, then they will be on the array, but Mover will move cache-prefer shares to cache... except Mover can't move open files, so you have to disable Docker and VM Manager in Settings and then run Mover.

 

If you want more specific advice post your Diagnostics ZIP (Tools - Diagnostics).

YOU MAY IGNORE THIS AND MOVE TO THE NEXT POST.

 

So I have finished replacing my NVMe with a spinner, and I am now adding two more disks into the array (it's still rebuilding), with one disk as parity and one as data. I've left the NVMe unmounted, and my TrueNAS server is mounted to provide data for various Dockers (e.g. Plex). In the short term I want to shift all my VMs and jails (Dockers) to my Unraid box, since that server is now ancient, can barely transcode x265 (4K > 720p), and TrueNAS lacks graphics card support. In the long term I might buy new HDDs, shift all the files to Unraid too, and go all in on it.

One of the disks (being old) had some unrecoverable errors, I think, despite only showing 4 errors on the preclear read test (SMART), so I replaced it with another one I had lying around. Like I said, it'll mostly never see much more rigorous use as a server than as a PC. I mean, I could have run the Windon'ts Server edition, but I'm personally not a fan of theirs. That's why back in the day (2014) I chose FreeNAS despite its heavy requirements. I'm not going to say which is superior to which, but I'm definitely loving the flexibility of Unraid so far.

 

If you find this unsuitable I shall upload another file after the rebuild completes.

unraid-diagnostics-20210310-1839.zip

Edited by bhootz
More relevant and updated next post
Link to comment
6 hours ago, bhootz said:

So I have finished replacing my NVMe with a spinner, and I am now adding two more disks into the array (it's still rebuilding), with one disk as parity and one as data. I've left the NVMe unmounted, and my TrueNAS server is mounted to provide data for various Dockers (e.g. Plex). In the short term I want to shift all my VMs and jails (Dockers) to my Unraid box, since that server is now ancient, can barely transcode x265 (4K > 720p), and TrueNAS lacks graphics card support. In the long term I might buy new HDDs, shift all the files to Unraid too, and go all in on it.

One of the disks (being old) had some unrecoverable errors, I think, despite only showing 4 errors on the preclear read test (SMART), so I replaced it with another one I had lying around. Like I said, it'll mostly never see much more rigorous use as a server than as a PC. I mean, I could have run the Windon'ts Server edition, but I'm personally not a fan of theirs. That's why back in the day (2014) I chose FreeNAS despite its heavy requirements. I'm not going to say which is superior to which, but I'm definitely loving the flexibility of Unraid so far.

 

If you find this unsuitable I shall upload another file after the rebuild completes.

unraid-diagnostics-20210310-1839.zip

So the pool is now complete (for now): 4x 4TiB disks, with one as parity, running successfully (diagnostics attached).

First things first... oh man, I miss the NVMe VMs. My Windows 10 VM took what seemed like an eternity to load, and the CPUs were idling at about 10% for the 4C/8T config assigned to the VM. I know that when the same VM was on the NVMe they were red (100%) during startup and shutdown.

Another thing I noticed: the data transfer speed has dropped. When I was transferring from TrueNAS to Unraid (single NVMe array) I was hitting 100-110 MB/s, which is about the theoretical bandwidth of gigabit, and though I don't remember exactly, using a laptop on WiFi to connect the two was also giving about 10 MB/s writes to Unraid.

With the spinners, the performance has taken a big hit (which I'm still not understanding completely). My VM transfer speed started at about 32 MB/s, went up to about 50 MB/s, and is now loitering at about 10-15 MB/s. I honestly thought that the 1 Gbps gigabit link would choke before the 6 Gbps SATA, and hence the speeds would always stay around 100 MB/s. Using the laptop in between makes it hit ~2 MB/s.
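
Just to double-check my own units (megabits vs megabytes keep tripping me up):

```python
# Quick unit check: network and SATA figures are quoted in bits/s,
# while file transfer dialogs report bytes/s
def mbps_to_MBps(megabits):
    return megabits / 8

print(mbps_to_MBps(1000))   # 1 Gbps Ethernet  -> 125 MB/s raw (~110 MB/s in practice)
print(mbps_to_MBps(6000))   # SATA III 6 Gbps  -> 750 MB/s raw (~550-600 MB/s usable)
print(mbps_to_MBps(887))    # Orbi backhaul    -> ~111 MB/s best case over WiFi
```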

Latest diagnostics attached (post array rebuild)

unraid-diagnostics-20210311-0132.zip

Link to comment

As you found out, you really don't want a VM vdisk on the parity protected array. Most people set up a cache pool (can be 1 device) to hold their VM vdisks.

 

Updating parity in real time on the spinning array means every write is pretty much 1/4 the raw media speed, due in large part to latency. The parity array is best suited for WORM-type storage; media collections are ideal. Read speed is pretty much raw media speed.
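
To put a number on it: with the default read/modify/write method, one array write turns into four disk operations, which is roughly where the 1/4 figure comes from. Illustrative only (not Unraid's actual code):

```python
# Illustrative only: why a parity-protected write costs ~4 disk operations
# with the default read/modify/write method.
def parity_update(old_data: int, new_data: int, old_parity: int) -> int:
    """New parity = old parity XOR old data XOR new data."""
    return old_parity ^ old_data ^ new_data

# 1. read old data block      (disk op 1)
# 2. read old parity block    (disk op 2)
# 3. write new data block     (disk op 3)
# 4. write new parity block   (disk op 4)
old_data, new_data, old_parity = 0b1010, 0b0110, 0b1100
new_parity = parity_update(old_data, new_data, old_parity)
print(bin(new_parity))  # 0b0
```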

Link to comment
3 hours ago, jonathanm said:

As you found out, you really don't want a VM vdisk on the parity protected array. Most people set up a cache pool (can be 1 device) to hold their VM vdisks.

 

Updating parity in real time on the spinning array means every write is pretty much 1/4 the raw media speed, due in large part to latency. The parity array is best suited for WORM-type storage; media collections are ideal. Read speed is pretty much raw media speed.

Thanks for the reply. And yes, another thing I realized: I think the virtio LAN drivers are limited to 500 Mbps, which is not too great for transferring data. For Plex etc. it should be fine, since most movies are under ~100 Mbps, so I do have a lot of headroom.

You mention setting the VM on cache. What about unassigned devices? Since I have a 2TB NVMe and currently it's a single drive, wouldn't that make better sense? I might be wrong here, but I'm looking for ideas and options.

Also, I did see a video where SpaceInvader One passes through an NVMe, partitions it, and uses one partition to boot his VM. What he doesn't show is what happens to the XFS-formatted part of the drive. Is it available to the server/Dockers even with the VM running? I would assume so, but then again it went uncovered in the video.

I guess I'm ready for the next logical step in my Unraid journey, where I find out how the NVMe suits me best.

One final question: to 'move' the Dockers onto the NVMe, will moving the appdata folder of each Docker (on the "Edit" page) to the NVMe suffice, or do I need to do something more?

Link to comment
9 hours ago, bhootz said:

You mention setting the VM on cache. What about unassigned devices?

6.9 allows multiple cache pools, so it's preferable to just set up another pool rather than using UD.

 

The best way to move user shares is to assign the appropriate cache pool and mover setting, make sure there are no open files, then run the mover.

Link to comment
