elnevera Posted July 7, 2019
Hi All, A couple of days ago I upgraded from 6.7 to 6.7.2. Everything looked to go okay. Then yesterday I noticed some odd behaviour. I was trying to clear some entries from Radarr but noticed they kept reappearing when I refreshed the page. Then I got an email alert telling me that Fix Common Problems had found an issue. When I had a look it said "Unable to write to cache - Drive mounted read-only or completely full." It is not full; this is the content of the btrfs-usage.txt file in the diagnostics:

Overall:
    Device size:          223.57GiB
    Device allocated:      30.02GiB
    Device unallocated:   193.55GiB
    Device missing:           0.00B
    Used:                   7.43GiB
    Free (estimated):     215.27GiB  (min: 215.27GiB)
    Data ratio:                1.00
    Metadata ratio:            1.00
    Global reserve:        16.56MiB  (used: 0.00B)

             Data      Metadata  System
Id Path      single    single    single   Unallocated
-- --------- --------- --------- -------- -----------
 1 /dev/sdh1  29.01GiB   1.01GiB  4.00MiB   193.55GiB
-- --------- --------- --------- -------- -----------
   Total      29.01GiB   1.01GiB  4.00MiB   193.55GiB
   Used        7.28GiB 145.83MiB 16.00KiB

I have attached the full diagnostics. Now that I have been playing around, I have noticed that Mover isn't moving files to the array after I changed the settings of a share to not use the cache. Mover runs but the files remain. I have noticed a few messages in the log that say the following:

Jul 7 09:36:21 Mystique kernel: BTRFS error (device sdh1): bdev /dev/sdh1 errs: wr 87, rd 324, flush 0, corrupt 0, gen 0

Can anyone help? Cheers
mystique-diagnostics-20190707-0850.zip
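That kernel line carries btrfs's cumulative per-device error counters. A small sketch (the awk/tr filter is my own, not an Unraid tool) pulls the counts out of a captured log line; on the live server, `btrfs device stats /mnt/cache` reports the same counters directly without any log parsing:

```shell
# Extract the error counters from a captured BTRFS kernel log line.
# On a running system, `btrfs device stats /mnt/cache` shows the same
# cumulative counters (write, read, flush, corruption, generation).
line='Jul 7 09:36:21 Mystique kernel: BTRFS error (device sdh1): bdev /dev/sdh1 errs: wr 87, rd 324, flush 0, corrupt 0, gen 0'
echo "$line" | awk -F'errs: ' '{print $2}' | tr -d ','
# -> wr 87 rd 324 flush 0 corrupt 0 gen 0
```

Non-zero wr/rd counts like these mean the device itself returned I/O errors to btrfs, which is what eventually forces the filesystem read-only.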
elnevera Posted July 7, 2019
Just got an email now saying the cache drive has CRC errors totalling 52. Is the drive dying?
Vr2Io Posted July 7, 2019
31 minutes ago, elnevera said:
Is the drive dying?

According to its SMART report, it doesn't seem to be. But there are I/O errors; it may just be a filesystem problem, not an SSD issue. Would you try starting the array in maintenance mode and performing a BTRFS check/repair? Besides, it's best to connect the SSD to the mainboard rather than the LSI controller, so the SSD can perform TRIM.

Jul 7 09:36:16 Mystique kernel: BTRFS info (device sdh1): forced readonly
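As a sketch of that suggestion (device path taken from the log above; the helper function is my own), the plain form of `btrfs check` is read-only and safe to run from the console with the array started in maintenance mode, while `--repair` rewrites metadata and should only follow a backup:

```shell
# With the array in maintenance mode, a plain check is read-only and safe:
#   btrfs check /dev/sdh1
# The --repair form modifies the filesystem and is a last resort:
#   btrfs check --repair /dev/sdh1

# Illustrative helper (my own, for scripting): did a saved check log
# come back clean?
check_clean() {
  grep -q 'no error found' "$1"
}
```

Saving the check output to a file and grepping it like this is handy when comparing results across reboots.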
elnevera Posted July 7, 2019
59 minutes ago, Benson said:
Would you try starting the array in maintenance mode and performing a BTRFS check/repair? Besides, it's best to connect the SSD to the mainboard rather than the LSI controller, so the SSD can perform TRIM.

Thank you for replying. I put the array into maintenance mode, ran the btrfs check, and got the following:

Opening filesystem to check...
Checking filesystem on /dev/sdh1
UUID: 820c6483-f74c-4787-a59a-b6bed8429e8c
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
found 7975202816 bytes used, no error found
total csum bytes: 7129448
total tree bytes: 152928256
total fs tree bytes: 135495680
total extent tree bytes: 8011776
btree space waste bytes: 40543262
file data blocks allocated: 11645923328
 referenced 7746879488

Is that the right thing to run? Cheers
Vr2Io Posted July 7, 2019
That was a read-only check, and the result is positive:

found 7975202816 bytes used, no error found

If you stop/start the array as usual, does the same problem still occur? I don't have many more ideas.
elnevera Posted July 7, 2019
Interesting. I did the following as part of following the advice above:

- Stopped Docker
- Stopped VMs (don't actually have any)
- Stopped the array
- Ran the btrfs check
- Stopped the array
- Started the array normally
- Started the Docker service
- Started a couple of dockers

They worked okay (they weren't working normally before). I tested deleting a file that lives in a share on the cache drive and it actually went! All was good for a little while, but I am back to square one again. Same error as before.
elnevera Posted July 7, 2019
I've noticed that my appdata share was sitting on the array rather than the cache. I have set it to use cache "Only", but running mover isn't moving anything.
Squid Posted July 7, 2019
5 minutes ago, elnevera said:
I have set it to use cache "Only", but running mover isn't moving anything.

You want to use cache "Prefer" and then run mover. (But you will have to go to Settings -> Docker and disable the service, run mover, and when it's done re-enable the service; mover won't move any files that are in use.)
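The same sequence can be driven from the console. The paths below are the stock Unraid 6.x locations as far as I know (verify on your system), and the `on_cache` helper is purely illustrative, not part of Unraid:

```shell
# 1. Stop the Docker service so appdata files are not held open:
#      /etc/rc.d/rc.docker stop
# 2. With the share set to cache "Prefer", invoke mover manually:
#      /usr/local/sbin/mover
# 3. Restart Docker when mover finishes:
#      /etc/rc.d/rc.docker start

# Illustrative helper: does a given path live under the cache mount?
on_cache() {
  case "$1" in
    /mnt/cache/*) echo yes ;;
    *)            echo no ;;
  esac
}
```

After mover completes, the appdata files should answer "yes" under `/mnt/cache` and no copies should remain under `/mnt/disk*`.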
elnevera Posted July 7, 2019
3 minutes ago, Squid said:
You want to use cache "Prefer" and then run mover. (But you will have to go to Settings -> Docker and disable the service, run mover, and when it's done re-enable the service; mover won't move any files that are in use.)

Thanks Squid. I have tried mover with Docker disabled and with "Prefer" set, but mover runs for a few seconds and is then done. I've no idea how the docker appdata got onto the array, but I seem to be in a situation where any writes or deletes that involve the cache drive are just not working whatsoever. I've tried using unBalance to move the files back to the cache drive, but that isn't working either...
elnevera Posted July 7, 2019
When I try to use unBalance to move the files, it shows this message:

skipping:deletion:(rsync command was flagged):(/mnt/cache/appdata/FileBot/amc-exclude-list.txt)

The files never make it to the cache drive.
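unBalance drives plain rsync under the hood: if the copy fails (as it will when the destination has gone read-only), it flags the rsync command and skips deleting the source, which is exactly this message. A dry run on scratch directories sketches the behaviour; on the real server you would substitute the `/mnt/disk*/appdata` source and `/mnt/cache/appdata` destination paths from the skip message:

```shell
# Reproduce unBalance's copy step as a dry run on scratch directories.
src=$(mktemp -d) && dst=$(mktemp -d)
echo sample > "$src/amc-exclude-list.txt"
rsync -avn "$src/" "$dst/"      # -n: report what would copy, change nothing
echo "exit code: $?"            # non-zero on a real read-only cache
rm -rf "$src" "$dst"
```

Running the same `rsync -avn` against the real paths shows immediately whether the transfer itself would succeed, independent of unBalance.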
elnevera Posted July 7, 2019 (edited)
Another update... If I reboot the PC, I get about 10 minutes of use before FCP alerts me to the fact that the cache drive has gone read-only. I was trying to rebuild my dockers after deleting and recreating the image, but the install failed when the drive flipped to read-only (see attached image). This is bizarre.
Squid Posted July 7, 2019
You do not want the SSD connected to your HBA. Connect it directly to the motherboard instead.
elnevera Posted July 7, 2019
15 minutes ago, Squid said:
You do not want the SSD connected to your HBA. Connect it directly to the motherboard instead.

I've gone in and connected it to the motherboard. Booted back up, but the CRC count on the SSD has gone from 52 to 75 now.
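For reference, that counter is SMART attribute 199 (UDMA_CRC_Error_Count), and it is cumulative, so what matters is whether it keeps rising after the recabling, not its absolute value. The awk filter below is my own; it just pulls the raw value out of `smartctl -A` output:

```shell
# On the server: smartctl -A /dev/sdX | crc_raw
# Attribute 199 counts CRC failures on the SATA link; the raw value is
# the last field of the attribute line.
crc_raw() {
  awk '$1 == 199 { print $NF }'
}

# Example against a captured smartctl attribute line:
printf '199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 75\n' | crc_raw
```

Re-running this a day apart makes it easy to tell a still-failing link from historical errors.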
Squid Posted July 7, 2019
CRC errors tend to be from bad cabling. It is possible, though, that it is a bad drive.
elnevera Posted July 7, 2019
Thanks Squid. So far so good; I've been able to do more than before. This SSD is 4-5 years old, so it might be worth replacing. Can I ask why it makes a difference connecting it via the HBA vs the motherboard? It had been working fine for the last 2-3 months connected to the HBA. Cheers
Squid Posted July 7, 2019
2 hours ago, elnevera said:
why it makes a difference connecting it via the HBA vs the motherboard?

The vast majority of SSDs do not support TRIM unless they are on the motherboard. (Or to put it another way, HBAs only support TRIM on a very limited selection of SSDs.)
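A quick way to see what Squid describes is to check the discard granularity the kernel reports for the device: behind most HBAs, `lsblk -D` shows DISC-GRAN as 0B, meaning the kernel cannot issue TRIM at all. The helper is my own, and it assumes the standard `lsblk -D` column order (NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO):

```shell
# On the server: lsblk -D /dev/sdX | supports_trim
# A DISC-GRAN (column 3) of 0B means discard/TRIM is unavailable on the
# device's current controller path.
supports_trim() {
  awk 'NR > 1 && $3 != "0B" { ok = 1 } END { print (ok ? "yes" : "no") }'
}
```

If it prints yes once the SSD is on the motherboard ports, a manual `fstrim -v /mnt/cache` should then succeed.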