s_mason16 Posted March 16, 2018

Yesterday (March 15) I remoted in because I had a request to download a TV show. I got in no problem, everything seemed normal. I went to Docker and launched the Sonarr web UI. It had trouble loading the show's data at first, but it's not a popular show, so I thought maybe it just hadn't been indexed. I clicked to add the series and went to download season 1; it was still having issues loading the show info and wouldn't download anything.

I backed out and everything was weird. I tried to restart the Docker container and got an execution error. Other stuff was odd too, so I ran CA Fix Common Problems. It said something was full, that some error logging should be shut off (and gave me the link to do so), and told me to run the Appdata Cleanup plugin. I ran it and, without looking (my mistake), deleted my Plex appdata folder, though I hadn't realized what I'd done yet. Thinking maybe the remote connection was why things were being wonky, I restarted the server, went home, and tried again from there.

I found out I had deleted Plex and started a new install; it downloaded all the movie art and metadata fine. Fix Common Problems then gave me a warning about having more than one key in the config, which it never had before. The CA Apps page was acting weird and said it had no internet connection, along with several other little issues. When I finally tried to start downloading the TV show, SABnzbd paused because it said the disk was full, even though the cache drive was sitting at 53 GB of 118 GB used. I went into maintenance mode and ran a filesystem check, and the cache is in read-only mode. On top of that, the parity drive and disk 1 show a SMART issue, though it says it could be lack of power.

Is the SSD in some sort of safe mode, or did I just have three drives fail at the same time? Is the SSD fixable or bad, or do I need to start over with a fresh install of Unraid?
pwm Posted March 16, 2018

First off, you need to post your diagnostics so we can see what SMART data you're talking about, and also check the logs for any interesting events. And you want to grab the diagnostics before rebooting so you don't lose the interesting log data.
s_mason16 Posted March 17, 2018 (edited)

I have this, and I do have internet access; the server has two cables running to it, both showing activity. Is there a way to fix the drive issue?

Edited March 17, 2018 by s_mason16: added second image
Squid Posted March 17, 2018

You need to post your diagnostics.
Squid Posted March 17, 2018

8 hours ago, s_mason16 said: and tells me to run the appdata clean up plugin

Oh, and there is no test in FCP that will suggest that.
s_mason16 Posted March 17, 2018 Author Share Posted March 17, 2018 my bad, thought that's what logs was, air head moment. unraid-m-diagnostics-20180316-2316.zip Quote Link to comment
Squid Posted March 17, 2018

Mar 16 22:54:13 UNRAID-M root: Fix Common Problems: Error: Unable to write to cache
/dev/sdb1  112G  51G  0  100%  /mnt/cache

Probably because of this (@johnnie.black is the expert on this stuff):

Overall:
    Device size:       111.79GiB
    Device allocated:  111.79GiB

The problems with Docker may or may not be related to this (and it appears there is an issue with its image), but until the cache problem gets rectified, there's no point in trying to fix the Docker problems. Incidentally, since the Docker image is messed up, it doesn't look like the service is running, so when you ran Cleanup Appdata, it recognized that nothing was installed and therefore offered (and you accepted) to delete the appdata for Plex.
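A quick way to confirm that fully-allocated state from the console (a sketch; the mount point assumes a stock Unraid cache at /mnt/cache):

```shell
# Show how much of the device btrfs has allocated to chunks vs. actually used.
# When "Device allocated" equals "Device size", writes can fail with ENOSPC
# even though df still reports plenty of free space.
btrfs filesystem usage /mnt/cache

# On older btrfs-progs, the same picture comes from these two commands:
btrfs filesystem show /mnt/cache
btrfs filesystem df /mnt/cache
```

The key is that df measures used data, while btrfs also pre-allocates chunks; the two can disagree badly on a drive in this state.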
s_mason16 Posted March 17, 2018 Author Share Posted March 17, 2018 thank you for looking through that. That's what fcp was telling me. tbh on nearly the best case im just hoping it was an ssd failure. because then just a replacement ssd and at this point ill probably just start with a fresh install either way. (poor sonarr/radarr/sabnzb settings wahh). but that doesn't exactly explain the smart test 199 crc error on drive 0 and 1. but if its not the ssd then it's a long track down the issue game. or a malfunction with a different hardware piece. or if it was just a software/os error, the fresh install will resolve it, but will my configuration just make it happen again? Quote Link to comment
Squid Posted March 17, 2018

8 minutes ago, s_mason16 said: ssd failure

It's file system stuff; it's all fully allocated. I'm not a fan of using BTRFS on a cache drive unless you plan on running a cache pool and have a rock-steady server. XFS is the better choice for a single drive.

11 minutes ago, s_mason16 said: smart test 199 crc error on drive 0 and 1

The 8 CRC errors on the parity drive are nothing to worry about. The 4000 on disk 1 (over 5300 power-on hours) are. This is a cabling/connection problem; reseat the cables. You never saw them before because the attribute has only been monitored starting in 6.5.
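To watch whether the CRC count keeps climbing after reseating the cables, something like this works from the console (a sketch; /dev/sdX is a placeholder for the actual disk 1 device):

```shell
# Attribute 199 (UDMA_CRC_Error_Count) tracks link-level CRC errors, which
# almost always point at the SATA cable or connector rather than the disk.
smartctl -A /dev/sdX | grep -i -e crc -e '^199'
```

A raw value that keeps rising after reseating means the cable or port is still bad; a value that stays flat means the recorded errors are historical.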
JorgeB Posted March 17, 2018

To fix the fully allocated btrfs filesystem see here: https://lime-technology.com/forums/topic/62230-out-of-space-errors-on-cache-drive/?do=findComment&comment=610551
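The linked post is the authority here, but the usual fix is a filtered balance that compacts partly-used chunks and returns the freed ones to the unallocated pool (a sketch, assuming the cache is mounted at /mnt/cache):

```shell
# Rewrite data chunks that are at most 75% full, packing their contents
# together so the emptied chunks become unallocated space again.
btrfs balance start -dusage=75 /mnt/cache

# Verify the result: "Device allocated" should now be below "Device size".
btrfs filesystem usage /mnt/cache
```

If the balance itself fails with ENOSPC on a completely full filesystem, running it with progressively larger filters (-dusage=0, then 25, then 50) can free enough chunks for the bigger pass to succeed.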
pwm Posted March 17, 2018

An important thing about BTRFS is that it uses CoW (copy-on-write). This means that if you have a full drive and want to modify an existing file, you will fail: the file system has to make the writes to empty space first (the "copy on write"), and only later can it remap the file content to point to the newer data and release the previous version. This is different from almost all other file systems, which only fail when you try to add new files but always let you write changes to existing files as long as you don't try to increase the file size.

Lots of programs have hard-coded logic that assumes modifying an existing file can't fail, which means they can behave very badly on a CoW file system. Developers just have to learn the hard way not to assume that some operations can never fail.

So rule #1 with BTRFS is to not run it full. Rules #2 and #3 are also to not run it full.
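This failure mode can be reproduced without touching a real array, using a throwaway loopback btrfs volume (a sketch; needs root and btrfs-progs, and all paths are made up for the demo):

```shell
# Create and mount a tiny btrfs volume backed by an ordinary file.
truncate -s 200M /tmp/btrfs-demo.img
mkfs.btrfs -q /tmp/btrfs-demo.img
mkdir -p /tmp/cow-demo
mount -o loop /tmp/btrfs-demo.img /tmp/cow-demo

# Fill the volume until writes fail.
dd if=/dev/zero of=/tmp/cow-demo/filler bs=1M 2>/dev/null || true

# On a non-CoW filesystem this in-place overwrite would succeed, since the
# file does not grow. On a full btrfs volume it can fail with ENOSPC,
# because the modified block must first land in empty space.
dd if=/dev/urandom of=/tmp/cow-demo/filler bs=1M count=1 conv=notrunc

# Clean up.
umount /tmp/cow-demo
rm /tmp/btrfs-demo.img
```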
s_mason16 Posted March 18, 2018 Author Share Posted March 18, 2018 So i did a fresh install and used xfs for my single cashe setup, based on squids reccomendation. Am slowly installing all my dockers and plugins. Get plex started and it has the metadata downloaded. But my cache drive says its only using 2gigs. With xfs does the docker image and appdata no longer live on the cache and moves to the array? Before having issues i was at about 80gigs used after invoking the mover. Im not opposed to having the full docker image move to the array when not in use, but for the sake of performance are my plex clients going to notice the hit loading metadata and normal operations? Quote Link to comment
Squid Posted March 18, 2018 Share Posted March 18, 2018 Not at all. Appdata and the image should be on the cache drive if everything is good to go. Set the share for appdata to be use cache:prefer Set the share for your image (system?) to be use cache:prefer Settings - Docker - Disable the service Main - Array Operations - Mover Now Wait until mover is finished then restart docker 1 Quote Link to comment