Preparing to downgrade considerations



Hi, I will be moving my Unraid to another server soon. At the moment I have 4 disks for data plus a parity disk, but the new server only has four slots, so I will remove one of the data disks; I'm happy with how to do that. However, all the disks are currently on a controller, an H700 or something similar, and I believe it was set up in pass-through, i.e. the disks presented as individual single disks, not any kind of RAID. I don't think this card will fit the new server.

 

Can I expect the data on the disks to be seen in the new server, do you think, given that the controller is now gone?

 

Also, on my current server I have 4 NICs in aggregation; again, this card won't fit the new server. Is there a way to delete all that config?

 

I also have two 1TB SSDs in RAID 0 as the cache. Moving forward I am thinking of just having one 500GB drive. Is the easiest thing just to delete the current dockers and start again? My other thought was to install the 500GB in the current server and copy the dockers across, but could I expect the dockers to see that data in the new server?

 

Any other considerations? I am a bit nervous. Not about data loss, it's all backed up, but if I can pull this off and not have to start afresh that would be awesome!

 

Sad to be closing down my current server, but it uses 4 kWh a day and I just cannot justify it. I barely use 3% of the CPUs, yet at idle it still sucks up the electricity.

11 hours ago, garethsnaim said:

Can I expect the data on the disks to be seen in the new server, do you think, given that the controller is now gone?

Possibly. Unraid may complain of invalid partitions; you'll need to try. If that's the case there is a way to fix it, assuming parity is valid, but it will require rebuilding all the disks.

 

11 hours ago, garethsnaim said:

Also, on my current server I have 4 NICs in aggregation; again, this card won't fit the new server. Is there a way to delete all that config?

It should detect the new NIC(s) and use them, but if there are issues just delete network.cfg and network-rules.cfg.
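For reference, both files live on the USB flash drive under /boot/config, so they can also be deleted by hand from the flash share or with the flash drive plugged into another PC. Below is a minimal sketch of that cleanup, with Python used purely for illustration and assuming the standard /boot/config location and the array stopped:

```python
# Illustrative sketch only -- assumes the standard Unraid flash mount at
# /boot/config and that the array is stopped. Removing these files makes
# Unraid fall back to default network settings (no bonding/aggregation)
# on the next boot.
import os

FLASH_CONFIG = "/boot/config"

for name in ("network.cfg", "network-rules.cfg"):
    path = os.path.join(FLASH_CONFIG, name)
    if os.path.isfile(path):
        os.remove(path)  # drop the old 4-NIC aggregation config
        print(f"removed {path}")
    else:
        print(f"{path} not present, nothing to remove")
```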

 

11 hours ago, garethsnaim said:

I also have two 1TB SSDs in RAID 0 as the cache. Moving forward I am thinking of just having one 500GB drive. Is the easiest thing just to delete the current dockers and start again? My other thought was to install the 500GB in the current server and copy the dockers across, but could I expect the dockers to see that data in the new server?

Convert the pool to raid1, remove one of the devices, add the 500GB device, then remove the other device.


Well, I made a right pig's ear of this, lol. It turns out that whatever was happening with that RAID card, it was not pass-through, so the disks were not recognised off of it. That's fine, I had a backup, but god knows what I did with the cache.

 

I think I probably would have been better off just starting again, but I didn't, and it all seems to be working again. However, the appdata, ISOs, domains and system folders are not protected by the parity disk. This is because I copied them to disk 1 before moving the array. Then I messed up the cache somehow, so I created a new pool called newcache, moved those folders to it, and then moved them back.

 

Well, blah blah, but ultimately they are where they should be now, just unprotected. Can I expect them to come into line the next time the parity disk is checked, or is there something else I need to do?


Hi Jorge, so just to confirm: I only have the one drive for cache now, so it's fair not to expect that to show green?

 

Slightly surprising, as I had two disks before in RAID 0, which as far as I am aware means if one goes it all goes, and that always showed green across the board?

 

1 hour ago, garethsnaim said:

 

Slightly surprising, as I had two disks before in RAID 0, which as far as I am aware means if one goes it all goes, and that always showed green across the board?

That green status was an error: the logic for displaying green only took into account whether or not there was a single drive in the pool. All multi-drive pools were considered redundant, because redundancy is the default when you add multiple drives, and it's assumed that if you knew enough to change to RAID0 or single mode, you knew what you were getting into.

 

I don't know if that logic is in the list to be fixed or not.
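In other words, the old status check amounted to something like the sketch below. This is purely illustrative, not the actual Unraid source, and the function and parameter names are made up:

```python
# Purely illustrative -- not the actual Unraid code. The old logic only
# looked at the device count, so a raid0 or single-profile multi-device
# pool was still shown with a green (protected) status.
def pool_shows_green(num_devices: int) -> bool:
    # single-device pool: no redundancy possible, never shown as protected
    if num_devices <= 1:
        return False
    # multi-device pool: assumed redundant (raid1 is the default profile),
    # even if the user converted it to raid0 or single
    return True
```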

