Dartanis Posted November 24, 2018

Hi there,

I read that it was possible to transition my two SSDs from a RAID1 config to RAID0, and thought that fit my use case much better, since the cache is just a temporary holding location and I have Mover run hourly anyway. I put "-dconvert=raid0 -mconvert=raid1" in the Balance field of my cache window, saw it go to work, and thought: great, now I'll have a 500GB cache instead of a mirrored 250GB.

The total cache space never changed, though, and btrfs filesystem df listed:

Data, RAID1: total=2.00GiB, used=1.99GiB
Data, RAID0: total=18.00GiB, used=3.86GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=1.00GiB, used=322.72MiB
GlobalReserve, single: total=19.66MiB, used=64.00KiB

So it seemed to have created a bit of both profiles. I rebooted the server and the value still hadn't changed. In retrospect this was a very bad idea, but I then tried spinning down the (formerly) mirrored drive to see how the cache behaved. At that point a user of my Plex server informed me they had lost connection. I tried it on my local network: the server reported offline, and accessing the webGUI through ip:32400 was refused even though the port is still forwarded. I checked my Nextcloud docker and couldn't access it either, with the same "Refused to Connect" error.

And that is where I'm currently stuck. Reloading the Plex docker gives the same error, though in bridge mode it seems to time out instead. I decided it would be best to remove my cache drives from the pool, reboot, and then re-add them after running -dconvert=raid1 -mconvert=raid1 to try to revert my changes, but filesystem df still shows the RAID0 data there too. So now I'm basically trying to reset my cache and forcibly reload my appdata, and I'm unsure how to proceed.
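(For anyone finding this later: the options pasted into the Balance field are just arguments to btrfs balance start. A minimal sketch of the equivalent command line, assuming the pool is mounted at /mnt/cache, the usual Unraid path; the DRY_RUN guard is something I've added so the commands are only printed until you clear it:)

```shell
#!/bin/bash
# Print the commands instead of running them; set DRY_RUN= to execute for real.
DRY_RUN=${DRY_RUN:-echo}

# Full form of the "-dconvert=raid0 -mconvert=raid1" balance:
# data chunks become RAID0 (striped), metadata stays RAID1 (mirrored).
$DRY_RUN btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache

# Verify afterwards: each Data/Metadata line should show exactly one profile.
# Seeing both "Data, RAID1" and "Data, RAID0" at once (as in the df output
# above) means the balance stopped partway through.
$DRY_RUN btrfs filesystem df /mnt/cache
```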
JorgeB Posted November 24, 2018

Please post your diagnostics: Tools -> Diagnostics
Dartanis Posted November 24, 2018

Diagnostics attached: pantheon-diagnostics-20181124-0105.zip
JorgeB Posted November 24, 2018

There's a problem with the cache filesystem; that's why the balance didn't complete. The best way forward is to back up any important cache data, re-format (either RAID1 or RAID0), and restore the data.
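(A minimal sketch of the backup step, assuming there is room on an array disk; /mnt/disk1/cache-backup is a placeholder destination I've made up, and the DRY_RUN guard only prints the commands until you clear it:)

```shell
#!/bin/bash
# Preview only; set DRY_RUN= to actually copy.
DRY_RUN=${DRY_RUN:-echo}

# Copy anything worth keeping (appdata, etc.) off the cache before the wipe.
# /mnt/disk1/cache-backup is a placeholder path; adjust to a real array share.
$DRY_RUN rsync -a /mnt/cache/ /mnt/disk1/cache-backup/

# After re-formatting, restore by reversing the direction:
# $DRY_RUN rsync -a /mnt/disk1/cache-backup/ /mnt/cache/
```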
Dartanis Posted November 24, 2018

Nothing important lived on the cache drives. How exactly do I go about making sure the cache is set to RAID0 and reformatting it? I'm not sure where I'm supposed to initiate this action from.
JorgeB Posted November 24, 2018

Stop the array, clear both SSDs with:

blkdiscard /dev/sdX

then start the array, format the cache pool, and convert it to RAID0.
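(Spelled out as a sketch. Destructive: blkdiscard throws away everything on the SSDs, and sdb/sdc below are placeholder device names, so check yours on the Main page first. The DRY_RUN guard keeps it to a preview until cleared:)

```shell
#!/bin/bash
# Preview only; set DRY_RUN= to execute. blkdiscard is irreversible.
DRY_RUN=${DRY_RUN:-echo}

# 1. With the array stopped in the webGUI, discard all blocks on both
#    cache SSDs (sdb and sdc are placeholders for your actual devices):
$DRY_RUN blkdiscard /dev/sdb
$DRY_RUN blkdiscard /dev/sdc

# 2. Start the array; Unraid will offer to format the now-blank cache pool.

# 3. Convert the fresh pool to RAID0 data / RAID1 metadata:
$DRY_RUN btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
```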
Squid Posted November 24, 2018

After this is all sorted out, you need to upgrade to 6.6.5. 6.6.4 was only available for about a day and has a big issue with schedules not running, which may or may not have contributed to the problem.