SMorris Posted December 10, 2019

Not sure what's going on. I was in the middle of moving a lot of video files from one drive to another using Krusader and the whole process just quit (the screen went blank white). If I try to start the Krusader docker container it looks like it tries to start but goes right back to "stopped". I also can't remove the container (I keep getting an error message). Under the Log column it says "Removal in progress", and when I click the link the log just keeps repeating this line:

e":"can not get logs from container which is dead or marked for removal"}

I'm fairly new to Unraid so any thoughts would be appreciated. Thanks.
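A container stuck in "Removal in progress" can sometimes be cleared from a terminal session. A minimal sketch, assuming the container name `binhex-krusader` from this thread; the commands are guarded so they are skipped on hosts without Docker, and forcing removal here is a workaround, not a fix for the underlying disk problems found later in the thread.

```shell
# Sketch only: try to clear a container stuck in "Removal in progress".
# "binhex-krusader" is the container from this thread; substitute your own.
if command -v docker >/dev/null 2>&1; then
  docker ps -a --filter name=binhex-krusader || true   # confirm its state
  docker rm -f binhex-krusader 2>/dev/null || true     # force-remove if possible
  # If the daemon still refuses, stop Docker from Settings -> Docker in the
  # webGUI, start it again, and as a last resort delete/recreate docker.img.
else
  echo "docker not available on this host"
fi
```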
trurl Posted December 10, 2019

27 minutes ago, SMorris said: "I'm fairly new to unraid so any thoughts would be appreciated."

Sounds like your problems may be deeper than just this one docker. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post.
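If the webGUI itself is misbehaving, recent Unraid releases can also generate the same zip from an SSH/terminal session via the `diagnostics` command, which writes it under /boot/logs/. A guarded sketch (a no-op on non-Unraid hosts):

```shell
# Generate the diagnostics zip from the command line (recent Unraid releases).
if command -v diagnostics >/dev/null 2>&1; then
  diagnostics                       # writes a zip under /boot/logs/
  ls -t /boot/logs/*.zip | head -n 1   # show the newest zip to attach
else
  echo "diagnostics command not available on this host"
fi
```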
SMorris Posted December 11, 2019 (Author)

3 hours ago, trurl said: "Sounds like your problems may be deeper than just this one docker. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post."

Here is the diagnostics file. I appreciate the help.

serverbeast-diagnostics-20191211-0015.zip
trurl Posted December 11, 2019

16 hours ago, SMorris said: "Here is the diagnostics file. I appreciate the help. serverbeast-diagnostics-20191211-0015.zip"

I have split your posts into their own topic because you have hardware issues with multiple disks, and your problems likely have nothing at all to do with the binhex-krusader docker where you originally posted.
trurl Posted December 11, 2019

Why didn't you mention disk5 is invalid? Are you still trying to rebuild it? You have problems communicating with multiple disks, so the rebuild isn't going to be good. Likely a controller issue, or possibly power related.

Why do you have so many small disks in your array? I always recommend fewer, larger disks instead of more, smaller disks. Larger disks perform better and are more cost effective. More disks require more ports, which is probably the root of your troubles. At some point more disks require a more expensive Unraid license; you need Pro with that many disks. More disks require more power. Most importantly, more disks are more points of failure, and you have multiple failures going on right now.

I haven't looked at SMART for each of this large number of disks. Are any of your disks showing SMART warnings on the Dashboard? Did you test any of these disks before building a server out of them?

Post a screenshot of Main - Array Devices.
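The Dashboard warnings come from each disk's SMART data, which can also be read per-disk from a terminal with smartmontools. A sketch, assuming `/dev/sdb` as a placeholder device (substitute each array disk in turn); guarded so it is harmless on hosts without smartctl or that device:

```shell
# /dev/sdb is a placeholder -- substitute each array disk in turn.
DISK=/dev/sdb
if command -v smartctl >/dev/null 2>&1 && [ -e "$DISK" ]; then
  smartctl -H "$DISK"                  # overall SMART health verdict
  # the attributes most relevant to this thread:
  smartctl -A "$DISK" | grep -Ei 'Reallocated|Pending|Uncorrect|UDMA_CRC' || true
else
  echo "smartctl or $DISK not available on this host"
fi
```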
trurl Posted December 11, 2019

Forget about trying to actually use your server for dockers or anything else until you get these hardware problems taken care of. Do you have backups of anything important and irreplaceable?
JorgeB Posted December 11, 2019

5 minutes ago, trurl said: "Likely a controller issue"

It's almost certainly the controller: it's a 2-port ASMedia with port multipliers, and those are a known problem.
trurl Posted December 11, 2019

You really need to reconsider your entire hardware design. The current setup is going to be nothing but trouble.
SMorris Posted December 12, 2019 (Author)

18 hours ago, trurl said: "Why didn't you mention disk5 is invalid? Are you still trying to rebuild it? [...]"

Thanks for the reply. I know the hardware is not the greatest (10+ years old), but it was free. The drives are small and old, but again, free. This server has been used as a backup of all the data from another, newer LaCie server that had two brand new drives fail at once. My plan has been to go through the existing stash of drives I have kicking around from years past before I spend money on new ones, since I have had bad luck with new drives failing and don't want to spend the money on them right now.

There are some SMART warnings, and I have attached screenshots from the Dashboard and the array. I do get a lot of "UDMA CRC error rate" warnings, but only on certain drives. I have changed out the cabling on those and the counts have dropped considerably, but they are still climbing. I did not test the drives before building the server. I am now starting to do preclears on all the drives I add from here on out.

I just did a preclear on a used drive and have attached screenshots. Thoughts?
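Besides the preclear plugin, a SMART self-test is one quick, non-destructive way to vet a used disk before adding it. A sketch, assuming `/dev/sdX` as a placeholder (the guard makes it a no-op where that device or smartctl is absent):

```shell
# /dev/sdX is a placeholder -- substitute the disk being vetted.
DISK=/dev/sdX
if command -v smartctl >/dev/null 2>&1 && [ -e "$DISK" ]; then
  smartctl -t short "$DISK"        # ~2 minutes; use -t long for a full surface read
  sleep 130                        # wait for the short test to finish
  smartctl -l selftest "$DISK"     # show the self-test result log
else
  echo "smartctl or $DISK not available on this host"
fi
```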
JorgeB Posted December 12, 2019

You need to get rid of that ASMedia controller with port multipliers or it will be nothing but trouble.
SMorris Posted December 12, 2019 (Author)

20 minutes ago, johnnie.black said: "You need to get rid of that Asmedia controller with port multipliers or it will be nothing but trouble."

Sorry, I'm a little confused as to where you're getting a 2-port ASMedia controller with port multipliers from. The cards in the machine are a 4-port RAID 5 card, an Addonics 5x1 internal SATA port multiplier, and two Ziyituod 6-port PCI Express SATA expansion cards.
JorgeB Posted December 12, 2019

This one:

02:00.0 SATA controller [0106]: ASMedia Technology Inc. Device [1b21:0625] (rev 01)
    Subsystem: ASMedia Technology Inc. Device [1b21:1060]
    Kernel driver in use: ahci
    Kernel modules: ahci

It exists in 6- and 10-port models, and they look the same in the syslog. They usually look like this (respectively):

[photos of the 6-port and 10-port cards were attached]
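The controller line quoted above is the kind of output `lspci` produces; a sketch for listing the SATA controllers on your own box with vendor:device IDs so you can match them against yours:

```shell
# List SATA controllers with [vendor:device] IDs and the kernel driver in
# use; the ASMedia from this thread shows up as [1b21:0625].
lspci -nnk | grep -iA3 'sata controller' || echo "lspci not available or no match"
```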
SMorris Posted December 12, 2019 (Author)

8 minutes ago, johnnie.black said: [photos of the cards]

I really appreciate the input. It's the first picture. What would be a recommended replacement?
JorgeB Posted December 12, 2019

Any LSI with a SAS2008/2308/3008/3408 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these last two need to be crossflashed.
SMorris Posted December 12, 2019 (Author)

9 minutes ago, johnnie.black said: "Any LSI with a SAS2008/2308/3008/3408 chipset in IT mode [...]"

Thanks. I will look into it.
trurl Posted December 12, 2019

5 hours ago, SMorris said: "The drives are small and old but again free."

Penny wise but ...

Just to make sure you understand how this all works: parity by itself cannot recover anything. If a disk gets disabled or goes missing, Unraid must be able to reliably read every bit of all remaining disks in order to reliably rebuild the disabled or missing disk. So it is important that all disks in the array be reliable. And the more disks you have, the more chance one of them will need to be rebuilt, and the more chance one of the remaining but necessary disks will also give trouble. Such as the situation you were in, uselessly trying to rebuild disk5 while many other disks were not working as needed.

And it doesn't take an actual disk failure for a disk to become disabled. Your disk5 was disabled even though it hasn't really failed. Unraid disables a disk any time a write to it fails for any reason, since the failed write puts it out of sync with parity and it then needs to be rebuilt.

6 hours ago, SMorris said: "I do get a lot of 'UDMA CRC error rate' warnings but only on certain drives."

Those are usually communication problems, but the counts will not reset even if you fix the problem. You can acknowledge them on the Dashboard and they will not give another warning unless the count increases. Other than those, it just looks like a few Runtime Bad Blocks on each of the disks giving warnings.

After you get your controller sorted, maybe it will be OK as long as you are careful and diligent. Careful: always double check all connections, power and SATA, including power splitters, any time you are mucking about in the case. Diligent: set up Notifications to alert you immediately by email or another agent when a problem is detected, so you can fix a single problem before it becomes multiple problems and data loss.

You have already done a good thing by simply coming to this forum and asking for help. We really hate to see someone try random things before asking here, just making a bad situation worse.

And, of course, parity is not a backup. You must always have another copy of anything important and irreplaceable.
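Since acknowledged CRC counts only matter if they increase again, a small sketch that prints the current UDMA CRC count for each SATA disk can make increases after reseating cables easy to spot. This loops over `/dev/sd?` devices and is a silent no-op where smartctl is absent:

```shell
# Print the UDMA CRC error count for each SATA disk; compare the numbers
# over time -- only an *increase* after reseating cables matters.
for d in /dev/sd?; do
  [ -e "$d" ] || continue
  c=$(smartctl -A "$d" 2>/dev/null | awk '/UDMA_CRC_Error_Count/ {print $NF}')
  [ -n "$c" ] && echo "$d UDMA_CRC=$c"
done
true
```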