Robot Posted September 1, 2021 (edited)

Hey all! I am going to fully upgrade my server. All the hardware will be new; it will effectively be a new computer altogether. The current server has 7x WD Red 4TB drives (2 parity + 5 data). The new server will have 4x Seagate IronWolf 14TB drives (1 parity + 3 data). Now, I want to move all the data from the current server to the new one. The obvious way is to connect both to the same router and start copying, but 1 GbE tops out around 115 MB/s, so it would take around 45 h given how much data I have.

Question: Can I take the data disks out of the current server, plug them into the new server as unassigned devices, copy the data from them, and then put them back into the current server?

Thanks!
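The 45 h figure is consistent with a data volume of roughly 18 TB. A quick back-of-the-envelope check (the throughput is from the post; the exact amount of data is an assumption):

```shell
# Rough transfer-time estimate; data_tb=18 is an assumed data volume,
# not a figure stated in the thread.
data_tb=18
speed_mb=115   # practical 1 GbE throughput in MB/s
hours=$(( data_tb * 1000 * 1000 / speed_mb / 3600 ))
echo "${hours} h"   # prints: 43 h
```

About 43 h, in line with the "around 45 h" estimate above.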
JorgeB Posted September 1, 2021

7 minutes ago, Robot said: Can I take the data disks out of the current server, plug them into the new server as unassigned devices, copy the data from them, and then put them back into the current server?

Yes. If you want parity to remain valid, make sure you mount them read-only; that's an option in UD. You can also disable parity in the new server for faster simultaneous copying from multiple disks.
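For reference, a minimal sketch of what the read-only mount and copy look like from the command line. The device name, mount point, and destination share are hypothetical examples, not from this thread, and in practice the Unassigned Devices plugin handles the mount for you:

```shell
# Hypothetical example: /dev/sdX1 and the paths are placeholders.
mkdir -p /mnt/olddisk1
mount -o ro /dev/sdX1 /mnt/olddisk1        # read-only keeps the old array's parity valid

# Copy the disk's contents onto the new array (rsync resumes cleanly if interrupted)
rsync -avh --progress /mnt/olddisk1/ /mnt/user/share/

umount /mnt/olddisk1
```

The `-o ro` flag is what protects parity: no write ever touches the old disk, so it can go straight back into the original array.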
JonathanM Posted September 1, 2021

5 minutes ago, Robot said: Can I take the data disks out of the current server, plug them into the new server as unassigned devices, copy the data from them, and then put them back into the current server?

Yes. However, unless you manually mount the drives read-only, parity will need to be corrected when you put the drives back. Also, I doubt you would see a huge speed difference, and since you won't be sitting in front of the server waiting for it to finish, the difference between 45 hours and 40 hours isn't going to be meaningful. In my opinion the marginal gains aren't enough to justify the risk of physically moving the drives around.
Robot (Author) Posted September 1, 2021

Thank you both! Something else that just came to mind: I guess I should set the shares to not use the cache pool until all the data is copied over, right?

32 minutes ago, JorgeB said: Yes. If you want parity to remain valid, make sure you mount them read-only; that's an option in UD. You can also disable parity in the new server for faster simultaneous copying from multiple disks.

Thank you!

32 minutes ago, JonathanM said: Yes. However, unless you manually mount the drives read-only, parity will need to be corrected when you put the drives back. Also, I doubt you would see a huge speed difference.

Interesting. I thought the speed would be around 150-200 MB/s, which would be about 50% faster (assuming 175 MB/s vs 115 MB/s). That would have meant roughly 30 h instead of 45, and since I need the server for work stuff... Such a shame. I will do it over Ethernet, then.
JonathanM Posted September 1, 2021

2 minutes ago, Robot said: Interesting. I thought the speed would be around 150-200 MB/s, which would be about 50% faster (assuming 175 MB/s vs 115 MB/s).

Possible if you don't assign a parity disk, as JorgeB said.

3 minutes ago, Robot said: I guess I should set the shares to not use the cache pool until all the data is copied over, right?

Correct, there's no point in copying the data twice. Cache is good for small dumps that can be flushed overnight while the array is otherwise idle.
Robot (Author) Posted September 1, 2021

Perfect, everything's solved. Thanks!