Everything posted by Gnomuz

  1. Exactly the same situation here; I think an update of the container with the latest version of CrashPlan is required.
  2. Nothing new here; under Unraid I haven't found any way to adjust clock settings with nvidia-settings, as this utility requires at least a "minimal" X server, whatever that means exactly (my technical skills reach their limits there). I have, however, been able to overclock an RTX 3060 Ti with nvidia-settings CLI commands under Ubuntu on another rig, but still within a graphical environment like GNOME (see the sketch below for the kind of commands involved). Perhaps a more skilled member of the community could try to identify which minimal setup / add-ons would be required for Unraid to let us use a utility like nvidia-settings, which for the moment is installed by the official Nvidia drivers plugin but is totally non-operational. And I do agree that would be a nice-to-have feature in the near future, and a major step forward compared with the former unofficial integrations of the Nvidia drivers.
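     For reference, the overclock on the Ubuntu rig was done with nvidia-settings CLI commands along these lines (a sketch only: the offsets are illustrative, Coolbits has to be enabled in xorg.conf first, and a running X session is still required, which is precisely what Unraid lacks):

         sudo nvidia-xconfig --cool-bits=28                                # enable clock/fan control, then restart the X session
         nvidia-settings -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'        # +100 MHz core offset on the highest performance level
         nvidia-settings -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=500'   # +500 MHz (effective) memory offset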
  3. Well, it's been the normal behavior of the Nvidia drivers for a while. A "power limit" is enforced for the card by the vBIOS and drivers, and when the power drawn approaches this limit, the clocks are throttled. If you want to see what the power limits are and to what extent you can adjust them, just have a look at the output of 'nvidia-smi -q'. Mine looks like this on a P2000 (which is only powered by the PCIe slot, hence the 75 W min and max):

     Power Readings
         Power Management : Supported
         Power Draw : 65.82 W
         Power Limit : 75.00 W
         Default Power Limit : 75.00 W
         Enforced Power Limit : 75.00 W
         Min Power Limit : 75.00 W
         Max Power Limit : 75.00 W

     On this one, no adjustment is possible, as the Min and Max Power Limits are the same, and it's almost constantly throttled by the power cap when folding. Here is the same output for an RTX 3060 Ti on another rig:

     [...]
     Clocks Throttle Reasons
         Idle : Not Active
         Applications Clocks Setting : Not Active
         SW Power Cap : Active
         HW Slowdown : Not Active
             HW Thermal Slowdown : Not Active
             HW Power Brake Slowdown : Not Active
         Sync Boost : Not Active
         SW Thermal Slowdown : Not Active
         Display Clock Setting : Not Active
     [...]
     Power Readings
         Power Management : Supported
         Power Draw : 194.23 W
         Power Limit : 200.00 W
         Default Power Limit : 200.00 W
         Enforced Power Limit : 200.00 W
         Min Power Limit : 100.00 W
         Max Power Limit : 220.00 W
     [...]

     For this one, you can see it is throttled because 194 W are drawn out of a 200 W limit. But this power limit can be adjusted between 100 W and 220 W with the command 'nvidia-smi -pl XXX', where XXX is the desired limit in watts. That's the way it works, and it makes overclocking/undervolting more complicated. The way to go for efficient folding is to lower the power limit while overclocking the GPU (same performance with less power), but that's impossible afaik on an Unraid server, as you need an X server to launch the required 'nvidia-settings' overclocking utility... To summarize, nothing worrying in what you see, and not much to do about it. The only thing you can try under Unraid is to raise the power limit to the max and see if you get better folding results (see the example commands below). From my personal experience: minimal impact on performance, and a bit more power drawn 😞
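     If you want to experiment with the power limit under Unraid, the sequence is along these lines (a sketch; 180 W is just an example value, it must stay within the Min/Max Power Limit range reported by the query, and the setting does not survive a reboot):

         nvidia-smi -q -d POWER        # check the current, default, min and max power limits
         nvidia-smi -pm 1              # enable persistence mode so the setting sticks while no process uses the GPU
         nvidia-smi -i 0 -pl 180       # set a 180 W limit on GPU 0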
  4. No need to use an incognito window for me, but I rarely use it anyway; I prefer FAHControl, which gives you much more control and many more features. Maybe you can try emptying your browser cache?
  5. Hello, during the night the container was automatically updated from 7.6.21-ls25 to 7.6.21-ls26. Since then, the existing GPU slot is disabled, with the following message in the log:

     08:22:15:WARNING:FS01:No CUDA or OpenCL 1.2+ support detected for GPU slot 01: gpu:43:0 GP106GL [Quadro P2000] [MED-XN71] 3935. Disabling.

     The server had been folding for at least 10 days, and nothing else has changed in the setup (Unraid 6.9.0-beta35 with the Nvidia Driver plugin). Output of nvidia-smi:

     Sat Jan  9 09:26:32 2021
     +-----------------------------------------------------------------------------+
     | NVIDIA-SMI 455.45.01    Driver Version: 455.45.01    CUDA Version: 11.1     |
     |-------------------------------+----------------------+----------------------+
     | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
     | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
     |                               |                      |               MIG M. |
     |===============================+======================+======================|
     |   0  Quadro P2000        Off  | 00000000:2B:00.0 Off |                  N/A |
     | 64%   35C    P0    16W /  75W |      0MiB /  5059MiB |      0%      Default |
     |                               |                      |                  N/A |
     +-------------------------------+----------------------+----------------------+
     +-----------------------------------------------------------------------------+
     | Processes:                                                                  |
     |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
     |        ID   ID                                                   Usage      |
     |=============================================================================|
     |  No running processes found                                                 |
     +-----------------------------------------------------------------------------+

     So I think this release has an issue, probably far beyond my technical skills... Does anybody know how, as a workaround before a fix, I could downgrade from 7.6.21-ls26 to 7.6.21-ls25, which was perfectly fine? Thanks in advance for the support.
     Edit: I got support on the linuxserver.io Discord channel, and the problem is temporarily solved. Thanks again! The container has been updated with new Nvidia binaries, and it seems it is no longer compatible with my installed drivers (455.45.01). I don't know if it would be OK with the regular drivers proposed by the Nvidia Driver plugin, i.e. 455.38 in 6.9.0-beta35. For sure, the best solution would be to have the latest stable Nvidia drivers (v460.32.03) in Unraid, as the latest version of the container is fully compatible with them. The workaround to downgrade to the previous version of the container is to edit the template repository from "linuxserver/foldingathome" to "linuxserver/foldingathome:7.6.21-ls25" (see the sketch below). Folding again, which is what matters most for the moment! But this example clearly raises the question of regular Nvidia driver updates by Limetech once 6.9.0 is stable... Until then, I'll stick to the ls25 version of the container.
     Edit 2: As I had posted an issue on GitHub, the developers offered to let me test a dev build. I did, and it worked fine, so I suppose a new version of the container will soon be publicly available and others should not run into the same issue. Thanks to @aptalca and the linuxserver team for their responsiveness!
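     For those managing the container by hand rather than through the Unraid template, pinning the version is just a matter of pulling the tagged image and recreating the container from it (a sketch only; the port, paths and Nvidia variables are illustrative and should be copied from your own template):

         docker pull linuxserver/foldingathome:7.6.21-ls25
         docker stop foldingathome && docker rm foldingathome
         docker run -d --name=foldingathome \
           -e PUID=99 -e PGID=100 \
           -p 7396:7396 \
           -v /mnt/user/appdata/foldingathome:/config \
           --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all \
           linuxserver/foldingathome:7.6.21-ls25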
  6. It's been a month now since I last posted in this thread, concluding that we definitely needed help from the developers to debug this critical built-in function of Unraid, and there has been no feedback since. Diagnostics and screenshots documenting the bug had been provided on November 24th 2020, as requested by @limetech, and were followed by a deafening silence. It's winter now in Europe, I've had a few power outages, and it was a mess to restart everything properly, especially because unplugging / replugging the USB cable is not that easy when you're away from home... It's just a crappy workaround, and I think nobody would seriously consider it a stable production setup. So, I hope everybody will understand my bump. If any further diagnostics, tests, attempts, etc. are required, I'll be more than happy to provide them in a timely manner.
  7. Sorry, I upgraded the firmware when the UPS arrived, but I used a Windows laptop directly connected to the UPS to do so. I never tried to use the firmware upgrade tool from a Windows VM in Unraid with the UPS passed through; it may work, but you'll have to test it yourself. And I'm not aware of any Linux CLI option to upgrade the firmware, as the APC tool is Windows-only iirc. I'll take the opportunity of your APC UPS-related question to bump this thread, as the situation is still the same for me in 6.9-beta35. Who knows, a Christmas or @limetech miracle may happen, even if it's a bit late 😉
  8. Hi, well, the backup process has now been running continuously for four and a half days, so I can step back a bit more. The data to be backed up is 952 GB locally, 891 GB have been completed, and the remaining 61 GB should be uploaded within 11 hours from now. So the overall "performance" should be 952 GB in 127 hours, an average of 180 GB per day. Roughly, that is 2.1 MB/s or 17 Mbps, which is consistent with the obvious throttling I can see in Grafana on the CrashPlan container's upload speed. Data is compressed before being uploaded, so translating the size of the data to back up into network upload size is not totally accurate, and the level of compression will depend heavily on the data you back up. For me, the 893 GB backed up so far translated into 787 GB uploaded, i.e. a compression ratio of 88%. To sum up, if you get the same upload speed and compression ratio as me, your initial 12 TB backup should generate 10.8 TB (10,830 GB) to upload at an average speed of 180 GB per day, i.e. circa 60 days (the arithmetic is sketched below if you want to plug in your own numbers). Btw, as the upload speed of your internet connection seems to be 20/25 Mbps, the best you could hope for when uploading this amount of data is circa 40/50 days, so you wouldn't be throttled that much. As for the 10 GB per day you heard of, I suppose that's what you found in the CrashPlan FAQ. Let's say it's somehow their commitment, even if it's not legally binding; at circa 1 Mbps, they take very, very little risk of not fulfilling it... As for system resources, the container has an average CPU load of 9% (Ryzen 7 3700X), an average memory load of 1.2 GB (out of 32 GB), and a constant array read I/O of 2 MB/s. So you can see it's running, but it has a low footprint on the overall server load. I hope this helps you decide on your backup strategy.
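     For anyone wanting to redo the estimate with their own numbers, the arithmetic boils down to this (a sketch using the figures above; bc is just a convenient calculator):

         echo "952 / 127 * 24" | bc -l            # ~180 GB backed up per day on my connection
         echo "952 * 1024 / (127 * 3600)" | bc -l # ~2.1 MB/s, i.e. ~17 Mbps
         echo "12288 * 0.88 / 180" | bc -l        # ~60 days for a 12 TB set at an 88% compression ratio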
  9. I've just installed the container and activated the 30-day trial. First, not the faintest issue setting it up, very easy to install; I just set "Maximum memory" to 4096M to avoid crashes due to low memory. As for the upload bandwidth, my feelings are mixed so far. I have 2.5 TB to back up and started the process on Sunday. Until Sunday 11pm (all times CET), the throughput was 16 Mbps (or 2 MB/s). Then it was between 32 and 40 Mbps all of Dec. 21st, which is the practical limit of my 4G internet connection. Great! I then added other shares to the backup set, and since Dec. 22nd the average is back to 15/16 Mbps. So we are somehow throttled when backing up, that's obvious. They admit it between the lines in their FAQ, stating that we are not individually throttled but that, as the server-side bandwidth is shared, there are limitations. If that's true, they don't have enough bandwidth to supply a decent service. But I have a doubt, as the upload is obviously capped at 16 Mbps most of the time for me, which should not be the case all day long unless the server-side bandwidth is ridiculously undersized. Personally, I'll let the initial backup finish (4/5 days, less if I get decent speeds again) and then see if the service is viable on a day-to-day basis. But I must admit I share your doubts...
  10. This error is due to the absence of the 'nvme-cli' package in the container, and thus of the 'nvme' command. You have to install the missing package through the "Post Arguments" parameter of the container (Edit / Advanced View). Here is the content of my "Post Arguments" parameter for reference, working properly with nvme devices; adapt it to your specific configuration if required:

     /bin/sh -c 'apt-get update && apt-get -y upgrade && apt-get -y install ipmitool && apt-get -y install smartmontools && apt-get -y install lm-sensors && apt-get -y install nvme-cli && telegraf'

     Edit the parameter according to your needs; the container will restart and the error message should be gone from the log.
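     Once the container has restarted, you can check that the packages are actually there with something like this (assuming the container is named 'telegraf'; adjust to your own container name):

         docker exec telegraf nvme list        # should list your nvme devices instead of erroring out
         docker exec telegraf smartctl --scan  # confirms smartmontools is present as well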
  11. Well, no upgrade on Sundays, family first... As for write amplification, now that I can step back, I can confirm my preliminary findings about the evolution of the write load on the cache. I compared the average write load between 12/12 (BTRFS RAID1) and 12/19 (XFS) over a full 24h period, and the result is clear: 1.35 MB/s vs 454 kB/s, i.e. a factor of 3. As I can't believe the overhead of RAID1 metadata management alone could explain such a difference, it confirms, for me, a BTRFS weakness, whatever the partition alignment...
  12. As for the (non) spin-down issue, I understand it's brand new in RC1, due to the kernel upgrade to 5.9, when smartctl is used by e.g. telegraf to poll disk stats, and it should be fixed in the next RC. So I'm staying away for the moment. It would really be a pity to have all disks spun up in an Unraid array without getting the benefits of a file system which by nature keeps disks spun up but gives you performance and "minor" features such as snapshots or read caching in return, wouldn't it? 😉 For the write amplification, I had carefully followed the steps to align both SSDs' partitions to 1 MiB iirc, so the comparison of write loads between BTRFS RAID1 and XFS is on an already "optimized" BTRFS setup. I just glanced at the overnight stats, and a factor of 3 to 4 is confirmed so far.
  13. Thanks for your thoughts on my initial problem. I'm not on the latest RC1 because there's an issue which prevents disks from spinning down in configurations similar to mine (telegraf/smartctl). I'm waiting for RC2, and thus kernel 5.9+, to test the SSDs connected to the onboard SATA controller again, as it's a known issue with X470 boards. Btw, I only had disconnection issues with one of the SSDs, but BTRFS never coped with it. For sure, a RAID1 file system which turns unwritable when one of the pool members fails while the other is up and running, and requires a reboot to start over, is not exactly what I expected... So let's say we share the same unreliable experience with BTRFS mirroring. For the moment I've left the cache SSDs running via the LSI, converted the cache to XFS and forgotten about the mirroring, as you suggested. I immediately noticed that, with the same global I/O load on the cache, the constant write load from running VMs and containers was divided by circa 3.5 when switching from BTRFS RAID1 to XFS. At least BTRFS write amplification is a reality I can confirm...
  14. Everything seems to be running fine now. Just a little feedback on the I/O load when switching from a BTRFS RAID1 pool to XFS, over the same 4-hour period yesterday and today, with similar overall loads (2 active VMs and 3 active containers), per SSD of course:
     XFS: average write = 285 kB/s, average read = 13 kB/s
     BTRFS: average write = 968 kB/s, average read = 7 kB/s
     So the write load, which is the one we all watch on SSDs, is 3.4 times higher with BTRFS RAID1, and of course it wears both SSDs equally. Both SSDs had MBR partitions aligned to 1 MiB, as recommended. I anticipated a decrease of the write load when switching from a redundant pool to a single device, and had the feeling BTRFS RAID1 had significant I/O amplification anyway, but not that much. I'll provide stats over a longer period (see below for a simple way to collect the same averages without Grafana) so that everyone can get a better idea of the impact of BTRFS redundancy on SSD wear.
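     For those who don't run telegraf/Grafana, a rough way to get comparable per-device averages is iostat from the sysstat package (available through NerdPack on Unraid iirc; the device names are illustrative, and note that the first report covers the time since boot while the following ones cover each interval):

         iostat -d -m 60 sdb sdc    # average MB/s read and written per device, reported every 60 seconds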
  15. Well, the pressure is going down, thanks for the mental coaching 😉 VMs and containers restarted properly after the restoration of appdata and the move of both domains and system. Moving from the array to the SSD was of course way faster than the other way around, so I didn't have to wait too long. The only expected difference I can see is that the appdata, domains and system shares are now tagged "Some or all files unprotected" in the Shares tab, which makes sense, as they are on a non-redundant XFS SSD. I checked the containers for the appdata assignment and only found Plex to have /mnt/cache/appdata (see below for a quick way to check all templates at once). But I now remember I changed that over time, having first read that /mnt/cache was the way to go, and then that it was no longer useful and could raise issues. I think I now understand what the author had in mind... Anyway, I also conclude that the containers which were created with /mnt/cache and switched afterwards to /mnt/user have this hardlink problem (krusader and speedtest-tracker for me), and that it can only be resolved by a fresh install of the container. As I switched Plex just now and it restarted properly, I now have three containers whose appdata can't be moved, only backed up and restored. But enough for today; again, I learnt a lot thanks to the community, even if it was a bit the hard way 😆
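     For reference, a quick way to spot which containers still point directly at the cache is to grep the stored templates and the running containers' mounts (a sketch; the template path is the usual Unraid location for user templates, adapt if yours differs):

         grep -l '/mnt/cache/appdata' /boot/config/plugins/dockerMan/templates-user/*.xml
         docker inspect --format '{{.Name}} {{range .Mounts}}{{.Source}} {{end}}' $(docker ps -aq) | grep '/mnt/cache'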
  16. Sorry for not being clear, I must say I'm a bit worried, if not upset... I have formatted the single cache device with XFS, deleted all data in appdata (on disk2 in my case), and restored the backup I had just made. So far it seems OK: all data of the appdata share is on the SSD, and only on the SSD! And I checked one of the unmoved hardlinks I gave as an example; it has been restored by CA Restore. Now moving system and domains back to the cache with the mover, which hopefully should not raise any issue. Once Docker is running, I'll check all containers and revert them to /mnt/user/appdata to avoid any problem in the future...
  17. appdata is currently being restored from the latest backup (2:39pm); if I see any issue, I will restore the daily one. I'll keep you posted!
  18. I think at least some of them (I can't check right now, Docker is stopped) were mapped to /mnt/cache/appdata instead of /mnt/user, because I had read in this forum, under the signature of well-known veterans, that it was a good idea to do so for performance. Again, applying procedures without understanding the potential consequences was a bad move on my side... I launched the backup in the current state; in a worst-case scenario I have today's 3:30 am daily backup available, which should do the job, as I didn't do much this morning, as you can imagine!
  19. Thanks for this glimmer of hope @jonathanm, and don't worry, elegance is not my first priority right now 😉 Just one more question, now that everything is stopped (containers, VMs and the related managers) but appdata is "spread" between the cache (hard links) and the array (all other data). Can I:
     - launch a backup in this dubious state with CA Backup & Restore,
     - format the cache to XFS,
     - move system and domains (after setting them to cache: Prefer) to the "new" cache with the mover,
     - delete any trace of appdata on the array,
     - set appdata to cache: Prefer,
     - restore appdata with CA Backup and Restore?
     Or should I:
     - move appdata back to the cache with the mover (cache: Prefer),
     - back it up with CA Backup,
     - format the cache to XFS,
     - move system and domains (after setting them to cache: Prefer) to the "new" cache with the mover,
     - restore appdata with CA Backup and Restore?
     Or any other approach you would consider safer. You understand I have doubts about the restorability of a backup created in the current unsound state of appdata.
  20. That's what I sadly understand from other posts as well. But then how is it possible that containers creating these objects can be found in CA, when they are basically not compatible with one of the building blocks of Unraid, i.e. cache pool management? I think I could simply uninstall the three apps, delete their data directories, and reinstall them from scratch after the reformat and move-back operation. No problem for krusader, but:
     - I would lose the historical data from speedtest-tracker (not such a big deal),
     - above all, I would have to rebuild my Plex library from scratch, which is heavily customized with custom posters, obscure music albums manually indexed, and probably many other things I'm forgetting.
     That would be hours if not days of patient work lost and, to put it simply, an unrecoverable data loss. That was exactly what I expected not to happen with a dedicated NAS OS like Unraid. If there's no viable solution to get out of this dead end, I must admit "bitterness" will be an understatement to describe my state of mind...
  21. I quickly analyzed the output of "ls -lhaR /mnt/cache/appdata": apart from directories of course, only links (hard or not, that's beyond my technical skills) remain on the cache, and there are 50150 of them (searching for lrwxrwxrwx), mainly under /mnt/cache/appdata/Plex/..., as I expected. So the cache is definitely far from empty, and I'm quite sure formatting it now would prevent the 3 containers at stake from ever restarting after the move back from array to cache. For reference, I attach the output of ls -lhaR /mnt/cache/appdata after the partial move. Thanks anyway for trying to help me @itimpi! Any guidance from @JorgeB or @jonathanm, who have followed my nightmare from the beginning, would be more than welcome, if of course they are available. appdata-content.txt
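     For anyone wanting to do the same check without grepping the ls output, find gets there directly (a sketch, using the same path as above):

         find /mnt/cache/appdata -type l | wc -l   # count the symbolic links left on the cache
         find /mnt/cache/appdata -type l           # list them, to see which containers they belong to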
  22. Of course appdata is not empty, and as shown in the examples, the deepest directories have something in common: they all contain what seem to be hard links. I've read in other threads that there's a problem moving these hard links, so I think that's my problem. I also checked that the hard links remaining in the /mnt/cache/appdata directories do not exist in the corresponding /mnt/disk2/appdata directories. Needless to say, I don't have the faintest idea why these links are there; I just installed the containers from CA.
  23. Well, nothing is as simple as expected or documented... I followed the step-by-step @jonathanm was kind enough to write for me. All containers stopped, all VMs stopped, Docker stopped, the VM Manager stopped, the three shares (appdata, domains and system) changed to cache: Yes, and the mover launched. There were about 210 GB on the cache, so it took quite a long time. After the mover ended, I was surprised to see the cache still had 78.5 MB used. I browsed the cache from the Main tab: only appdata is still there, so system and domains were moved properly. Inside appdata I have data left from 3 containers:
     - binhex-krusader
     - plex
     - speedtest-tracker
     Some of their data has been moved to the array, but a few directories are left on the cache. I relaunched the mover, but nothing new happened. I then browsed to the deepest directories left on the cache and noticed that all of them contain symlinks. Here is an example for binhex-krusader:

root@NAS:/mnt/cache/appdata/binhex-krusader/home/.icons/BLACK-Ice-Numix-FLAT/16/actions# ls -lha
total 172K
drwxrwxr-x 1 nobody users 1.6K Jul 14 23:23 ./
drwxrwxr-x 1 nobody users 104 Jul 14 23:23 ../
lrwxrwxrwx 1 nobody users 26 Jul 14 23:23 archive-insert-directory.svg -> add-folders-to-archive.svg
lrwxrwxrwx 1 nobody users 17 Jul 14 23:23 document-sign.svg -> document-edit.svg
lrwxrwxrwx 1 nobody users 17 Jul 14 23:23 edit-entry.svg -> document-edit.svg
lrwxrwxrwx 1 nobody users 17 Jul 14 23:23 edit-map.svg -> document-edit.svg
lrwxrwxrwx 1 nobody users 17 Jul 14 23:23 editimage.svg -> document-edit.svg
lrwxrwxrwx 1 nobody users 17 Jul 14 23:23 end_of_life.svg -> dialog-cancel.svg
lrwxrwxrwx 1 nobody users 17 Jul 14 23:23 entry-edit.svg -> document-edit.svg
lrwxrwxrwx 1 nobody users 17 Jul 14 23:23 filename-ignore-amarok.svg -> dialog-cancel.svg
lrwxrwxrwx 1 nobody users 17 Jul 14 23:23 fileopen.svg -> document-open.svg
lrwxrwxrwx 1 nobody users 14 Jul 14 23:23 folder_new.svg -> folder-new.svg
lrwxrwxrwx 1 nobody users 17 Jul 14 23:23 group-edit.svg -> document-edit.svg
lrwxrwxrwx 1 nobody users 14 Jul 14 23:23 group-new.svg -> folder-new.svg
lrwxrwxrwx 1 nobody users 13 Jul 14 23:23 gtk-info.svg -> gtk-about.svg
lrwxrwxrwx 1 nobody users 17 Jul 14 23:23 gtk-open.svg -> document-open.svg
lrwxrwxrwx 1 nobody users 16 Jul 14 23:23 gtk-yes.svg -> dialog-apply.svg
lrwxrwxrwx 1 nobody users 20 Jul 14 23:23 kdenlive-menu.svg -> application-menu.svg
lrwxrwxrwx 1 nobody users 16 Jul 14 23:23 knotes_close.svg -> dialog-close.svg
lrwxrwxrwx 1 nobody users 19 Jul 14 23:23 ktnef_extract_to.svg -> archive-extract.svg
lrwxrwxrwx 1 nobody users 17 Jul 14 23:23 list-resource-add.svg -> list-add-user.svg
lrwxrwxrwx 1 nobody users 17 Jul 14 23:23 mail-thread-ignored.svg -> dialog-cancel.svg
lrwxrwxrwx 1 nobody users 20 Jul 14 23:23 menu_new.svg -> application-menu.svg
lrwxrwxrwx 1 nobody users 29 Jul 14 23:23 object-align-vertical-bottom-top-calligra.svg -> align-vertical-bottom-out.svg
lrwxrwxrwx 1 nobody users 19 Jul 14 23:23 offline-settings.svg -> network-connect.svg
lrwxrwxrwx 1 nobody users 17 Jul 14 23:23 open-for-editing.svg -> document-edit.svg
lrwxrwxrwx 1 nobody users 8 Jul 14 23:23 password-show-off.svg -> hint.svg
lrwxrwxrwx 1 nobody users 19 Jul 14 23:23 relationship.svg -> network-connect.svg
lrwxrwxrwx 1 nobody users 26 Jul 14 23:23 rhythmbox-set-star.svg -> gnome-app-install-star.svg
lrwxrwxrwx 1 nobody users 20 Jul 14 23:23 selection-make-bitmap-copy.svg -> fileview-preview.svg
lrwxrwxrwx 1 nobody users 16 Jul 14 23:23 stock_calc-accept.svg -> dialog-apply.svg
lrwxrwxrwx 1 nobody users 8 Jul 14 23:23 stock_edit.svg -> edit.svg
lrwxrwxrwx 1 nobody users 16 Jul 14 23:23 stock_mark.svg -> dialog-apply.svg
lrwxrwxrwx 1 nobody users 14 Jul 14 23:23 stock_new-dir.svg -> folder-new.svg
lrwxrwxrwx 1 nobody users 13 Jul 14 23:23 stock_view-details.svg -> gtk-about.svg
lrwxrwxrwx 1 nobody users 16 Jul 14 23:23 stock_yes.svg -> dialog-apply.svg
lrwxrwxrwx 1 nobody users 24 Jul 14 23:23 tag-folder.svg -> document-open-folder.svg
lrwxrwxrwx 1 nobody users 9 Jul 14 23:23 tag-places.svg -> globe.svg
lrwxrwxrwx 1 nobody users 18 Jul 14 23:23 umbr-coll-message-synchronous.svg -> mail-forwarded.svg
lrwxrwxrwx 1 nobody users 18 Jul 14 23:23 umbr-message-synchronous.svg -> mail-forwarded.svg
lrwxrwxrwx 1 nobody users 17 Jul 14 23:23 view-resource-calendar.svg -> view-calendar.svg
lrwxrwxrwx 1 nobody users 16 Jul 14 23:23 x-shape-image.svg -> view-preview.svg
lrwxrwxrwx 1 nobody users 13 Jul 14 23:23 zoom-fit-drawing.svg -> zoom-draw.svg
lrwxrwxrwx 1 nobody users 13 Jul 14 23:23 zoom-fit-page.svg -> page-zoom.svg
lrwxrwxrwx 1 nobody users 22 Jul 14 23:23 zoom-select-fit.svg -> zoom-fit-selection.svg

     Another example from speedtest-tracker:

root@NAS:/mnt/cache/appdata/speedtest-tracker/www/node_modules/@babel/plugin-proposal-class-properties/node_modules/.bin# ls -lha
total 4.0K
drwxrwxr-x 1 911 911 12 Dec 18 03:48 ./
drwxrwxr-x 1 911 911 8 Dec 14 03:48 ../
lrwxrwxrwx 1 911 911 36 Dec 18 03:48 parser -> ../\@babel/parser/bin/babel-parser.js

     And of course there are a lot of similar examples in the Plex Metadata directory. I conclude there's an issue when the mover transfers directories containing symlinks from the cache to the array. And I'm now in the middle of nowhere, with 99% of the cache data transferred to the array and 78.5 MB remaining on the cache, not "movable" for a reason beyond my technical knowledge. So I can't reformat the cache to XFS; I'm basically stuck in the middle of the operation without any next step in sight... Any help will be appreciated.
  24. Thanks for the step-by-step @nblom, persistent syslog is back! Joint efforts are finally rewarded 😉 Maybe this procedure should be added to the release notes, as the fix seems to have no effect without it. I will test once the next release candidate is published, as I can't afford to have all the array disks spun up 24/7 with RC1.
  25. Thanks for the guidelines. I had read the wiki but was unsure about the steps for changing the cache pool type (from two devices to one) and reformatting the remaining cache device; I wanted to be sure it was no different from replacing a faulty cache device with a brand new one.