
How to move cache drive to different slot


hocky


Hi,

No, I didn't move disks in a powered-on state so far, so that shouldn't be the source of the problem.

However, adding new drives and/or removing faulty ones during operation is one of my desired use cases.

Both the ports on the motherboard (at least six of the eight available) and the backplane I'm using support hot-plugging, so that's something I'll be testing in the near future.
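As a quick way to check whether the kernel actually detects a hot-plugged drive, here is a minimal sketch in Python. It assumes a Linux host with lsblk available (unRAID ships it); the script itself is just an illustration, not an unRAID tool:

import subprocess

def block_devices():
    # -d: whole disks only, -n: no header, -o NAME: device names only
    out = subprocess.run(["lsblk", "-dno", "NAME"],
                         capture_output=True, text=True, check=True)
    return set(out.stdout.split())

before = block_devices()
input("Hot-plug the drive, wait a few seconds, then press Enter... ")
after = block_devices()

# Any new device name here means the controller/kernel saw the hot-plug.
print("New block devices seen by the kernel:", sorted(after - before) or "none")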

 

Link to comment

I think it is best to assume that hot-swapping array or cache disks will not work, and it wouldn't really gain you anything anyway. unRAID isn't like a RAID system that automatically starts rebuilding a replaced disk; you still have to stop the array to make any disk assignment changes. If a disk is already installed you can stop the array and select it for assignment, but there really is no point in trying to hot-swap array or cache disks. You can try hot-swapping with Unassigned Devices and it may work. I do it all the time with eSATA and Unassigned Devices.

Link to comment

Hm, good point. If I have to stop the array anyway, then hot-swapping doesn't bring much benefit.

I'm planning to run a couple of services in VMs on the box. These services would go down when stopping the array as well.

I'm not so sure if unRAID is the right system for me in that case. As far as I understand, unRAID supports only one array. If I could put my VMs and data on different arrays, that would prevent the VMs from going down every time I make a change on the data array.

 

Link to comment

Yep, I need to think about it.

Probably I'm too influenced by testing. During testing, I'm making changes relatively often. During normal operation, there shouldn't be too many changes, that's true.

I guess it's also good practice to first set up a stable array and then add additional dockers or VMs on top of it.

Link to comment
On 8/31/2018 at 3:44 AM, John_M said:

But one many users tend to overlook in their haste to get everything working.

Well, the cause of that could be the trial period. If you want to test everything within it, you are a bit in a hurry and tend to take the second step before the first.

Link to comment

Hi, 

I moved the disks around a bit today and indeed, the parity disk was recognized when attached to a different SATA port on the motherboard. However, this didn't work with the cache drive.

Not a big deal since a missing cache drive only degrades performance, but any idea why this could be?

 

Link to comment
1 hour ago, hocky said:

Not a big deal since a missing cache drive only degrades performance, but any idea why this could be?

 

Saying that a missing cache only degrades performance suggests you aren't fully aware of how cache is typically used on unRAID. Many people have their docker images, docker application data and their VMs living exclusively on SSD cache. While they configure things this way for performance reasons, it also means that if cache is unavailable, their dockers and VMs don't just perform poorly, they won't work at all.
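To make that concrete, a minimal sketch of the failure mode, assuming the usual layout where the cache pool is mounted at /mnt/cache; the listed paths are hypothetical examples of cache-resident data, adjust to your own setup:

import os

CACHE_MOUNT = "/mnt/cache"  # usual unRAID cache mount point
# Hypothetical cache-resident paths; adjust to your own layout:
DEPENDENT = ["/mnt/cache/appdata", "/mnt/cache/domains", "/mnt/cache/system"]

if os.path.ismount(CACHE_MOUNT):
    print(f"{CACHE_MOUNT} is mounted; cache-resident dockers/VMs can start.")
else:
    print(f"WARNING: {CACHE_MOUNT} is not mounted.")
    print("Anything stored there, e.g. the following, will fail to start:")
    for path in DEPENDENT:
        print(" ", path)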

 

As for why you had some problem connecting cache on another port, we can only speculate without more information. That is what Diagnostics are for.

Link to comment
8 hours ago, trurl said:

As for why you had some problem connecting cache on another port, we can only speculate without more information. That is what Diagnostics are for.

I'll do another test later, running diagnostics before and after the change.
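One simple check alongside the diagnostics: unRAID assigns disks by their identifier (serial), not by port, so listing /dev/disk/by-id before and after the port move shows whether the drive re-appears under the same identity. A minimal Python sketch, assuming ATA drives (SAS or USB drives show up under other prefixes such as scsi- or usb-):

import os

BY_ID = "/dev/disk/by-id"  # standard udev path on Linux

for name in sorted(os.listdir(BY_ID)):
    # whole-disk ATA entries only; skip partition symlinks
    if name.startswith("ata-") and "-part" not in name:
        target = os.path.realpath(os.path.join(BY_ID, name))
        print(f"{name} -> {target}")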

8 hours ago, trurl said:

Saying that a missing cache only degrades performance suggests you aren't fully aware of how cache is typically used on unRAID.

I read that as well. My impression was that the cache drive is the least safe place to put them, because there is no redundancy (unless you add another cache drive). At the moment, my dockers/VMs are placed on the array itself. Actually, I'm looking for a solution where they can keep running even when the array is down.

Link to comment
1 hour ago, hocky said:

I'll do another test later, running diagnostics before and after the change.

I read that as well. My impression was that the cache drive is the least safe place to put them, because there is no redundancy (unless you add another cache drive). At the moment, my dockers/VMs are placed on the array itself. Actually, I'm looking for a solution where they can keep running even when the array is down.

The reason that VMs are rarely run from the array is that when you have parity protection in place, the VMs take a huge hit in performance.

 

Although it is not possible at the moment, any solution that allows VMs to run all the time will require them to be located somewhere other than the main array.

Link to comment
1 hour ago, itimpi said:

The reason that VMs are rarely run from the array is that when you have parity protection in place, the VMs take a huge hit in performance.

Ah OK. Good point.

1 hour ago, itimpi said:

Although it is not possible at the moment, any solution that allows VMs to run all the time will require them to be located somewhere other than the main array.

So, would placing the VMs on an unassigned drive (via the plugin) be a possible solution?

Link to comment
1 hour ago, hocky said:

Ah OK. Good point.

So, would placing the VMs on an unassigned drive (via the plugin) be a possible solution?

Not at the moment, as the libvirt service required by the VMs is stopped/started when the array stops/starts.

 

I already run my VMs from a UD drive, so I am hopeful that at some point the libvirt restriction will be removed.
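A minimal sketch of that dependency, just for illustration: probing libvirt with virsh shows it is unreachable while the array (and with it libvirtd) is stopped, even if the VM images themselves live on a UD drive:

import subprocess

def libvirt_reachable() -> bool:
    """Return True if 'virsh list' can talk to libvirtd."""
    try:
        # A non-zero exit (or a missing/hung virsh) means the service
        # is unavailable, which on unRAID is the case while the array
        # is stopped.
        result = subprocess.run(["virsh", "list", "--all"],
                                capture_output=True, text=True, timeout=10)
        return result.returncode == 0
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False

print("libvirt reachable:", libvirt_reachable())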

Link to comment
1 hour ago, itimpi said:

I already run my VMs from a UD drive, so I am hopeful that at some point the libvirt restriction will be removed.

Thanks for the information. I think that's also what I'm going to do. It seems to me the "cleaner" solution compared to putting them on a cache drive.

Link to comment
