tential Posted March 3, 2018 (Author)
Just now, johnnie.black said: "Yes, don't forget to empty it and backup first before running the scrub."
How do I run a scrub exactly? I started googling, but it's confusing how to actually execute it. Does the drive need to be 100% empty, or do I just move off as many files as possible? It's mostly incomplete torrents, so it's no big deal.
JorgeB Posted March 3, 2018
On the main page, click on the cache device and then Scrub, and check the repair checkbox. Scrub can be run with files on the pool; the backup was just a precaution in case there's a problem.
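For reference, the same scrub the GUI runs can be kicked off from a terminal. This is a sketch, assuming the pool is mounted at /mnt/cache (the Unraid default); adjust the path for your setup:

```shell
# Assumes the cache pool is mounted at /mnt/cache; requires root.
btrfs scrub start /mnt/cache       # start a scrub in the background
btrfs scrub status /mnt/cache      # check progress and the running error counts
# btrfs scrub start -B /mnt/cache  # -B runs in the foreground and prints a
                                   # summary (like the one below) when done
```

Note that plain `scrub start` only detects and reports errors on a single device; on a redundant pool it repairs from the good copy automatically.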
tential Posted March 3, 2018 (Author)
Because it's part of a cache pool, I have to do it for the whole pool, right? The option only shows up if I click the first drive.
tential Posted March 3, 2018 (Author)
Great, it's running now! Does it show a progress bar, or do I just leave it on "Running" until it completes?
Edit: I see that if I access the page again it shows the status.
tential Posted March 3, 2018 (Author)
scrub status for 3617a2c4-4ff3-4d25-a992-e5f0f2330a07
	scrub started at Sat Mar 3 06:37:42 2018 and finished after 00:30:49
	total bytes scrubbed: 1.29TiB with 170 errors
	error details: verify=169 csum=1
	corrected errors: 170, uncorrectable errors: 0, unverified errors:
I attached the diagnostics as well. Is this good? I hope it is!
tower-diagnostics-20180303-0714.zip
JorgeB Posted March 3, 2018
8 minutes ago, tential said: "corrected errors: 170, uncorrectable errors: 0"
All errors were corrected, so you should be fine. If you want, post new diags after a couple of days of normal use and I'll take a look to make sure.
tential Posted March 3, 2018 (Author)
Yay! Thanks a lot! Am I safe to expand to my other two 8TB drives waiting to join the pool?
JorgeB Posted March 3, 2018
19 minutes ago, tential said: "am I safe to expand to my other two 8TB drives waiting in pool?"
Should be; the server seems to have been completely stable since the PSU change.
tential Posted March 3, 2018 (Author)
Ok, sounds good. I started adding the two drives to the system. Does this look right? Both are 8TB, but it's telling me the total size it's clearing/adding is 8TB. Shouldn't it be 16TB, since I'm adding two 8TB drives?
itimpi Posted March 3, 2018
It looks like it is doing them in parallel, so my guess is that the size quoted is the largest disk, as it will take the longest.
JorgeB Posted March 3, 2018
That's normal; it's the largest drive size (though both are the same in this case). If you added a 4TB + 8TB drive it would show the same 8TB.
tential Posted March 10, 2018 (Author)
Ok, got everything up and running and everything has been fine, but one thing that has been plaguing me is that my memory usage slowly creeps up. I understand the "Linux ate my RAM" thing, but every time it hits 100%, all of my dockers become unresponsive. If I restart the dockers, I see the memory usage drop and I can use them again. Accessing the Unraid dashboard is also slow, but still possible, during this time.
It's not the worst thing in the world, still better than my last setup, where I restarted my download server far more often, but it seems like adding more RAM would just lead me to the same issue, only taking twice as long to occur. I'm only running Jackett, Radarr, Sonarr, and Transmission, so I'm not sure why my memory usage is so high. By the end, even restarting all the dockers frees less and less space, and it happens more and more often. Could I have a memory leak or something that isn't allowing my system to release RAM?
tower-diagnostics-20180310-0919.zip
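A quick way to see where the memory is actually going, assuming shell access to the server, is with a few standard tools:

```shell
# Snapshot of current memory use; look at "available", not "free" — cached
# pages (the "Linux ate my RAM" effect) are reclaimable, tmpfs writes are not.
free -h
# tmpfs mounts: anything written under these lives in RAM until deleted.
du -sh /dev/shm /tmp 2>/dev/null || true
# Per-container memory usage, when the Docker daemon is reachable.
if command -v docker >/dev/null 2>&1; then docker stats --no-stream || true; fi
```

If `free -h` shows low "available" while tmpfs directories keep growing, the pressure is coming from files parked in RAM rather than from a leaking process.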
Squid Posted March 10, 2018
18 minutes ago, tential said: "...my memory usage slowly creeps up ... every time it hits 100%, all of my dockers become unresponsive..."
See here
trurl Posted March 10, 2018
Also, it's possible to misconfigure a volume mapping on a docker and wind up with it writing stuff to RAM.
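One way to audit those mappings, assuming shell access and running containers, is to dump each container's host-side mount sources; the Go template below is standard `docker inspect` syntax:

```shell
# Print each running container's mounts as "host path -> container path".
# On Unraid, host paths should normally live under /mnt/user or /mnt/cache;
# a source under /tmp, or a container path that writes to a destination that
# was never mapped at all, ends up on RAM-backed storage inside the docker image.
for c in $(docker ps -q); do
  docker inspect --format '{{.Name}}: {{range .Mounts}}{{.Source}} -> {{.Destination}}; {{end}}' "$c"
done
```

Comparing that list against each app's configured download/media paths shows whether, say, Transmission is really writing to a mapped share or to an unmapped path inside the container.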
tential Posted March 11, 2018 (Author)
I'm trying what Squid suggested. I went to update, and it said the latest stable branch is 6.4.1. According to the quote, I should test out an RC of 6.5. Wish me luck...
tential Posted March 11, 2018 (Author)
I might have made my issue worse. What's going on with Disk 6 now? I googled, and it said the error doesn't mean the drive is failing, and Unraid doesn't show an error, just a SMART disk warning. I attached my diagnostics.
tower-diagnostics-20180310-1800.zip
trurl Posted March 11, 2018
25 minutes ago, tential said: "I might have made my issue worse. What's going on with Disk 6 now?"
https://lime-technology.com/forums/topic/66327-unraid-os-version-641-stable-release-update-notes/
Scroll down to the 1st bullet in "Solutions to Common Problems". It would probably be a good idea to take a look at the rest of that thread as well.
tential Posted March 11, 2018 (Author)
5 hours ago, trurl said: "Scroll down to the 1st bullet in 'Solutions to Common Problems'..."
Started with the first post, but will do, thanks! Makes sense now.
tential Posted March 13, 2018 (Author)
After running a repair on the SSDs, one of my torrents in Transmission says "Error: Structure needs cleaning". How can I fix that? Can I simply delete the torrent and restart it, or do I actually need to run something on my SSD again? Thanks.
tential Posted March 19, 2018 (Author)
So, after a while of uptime with no issues, I had errors again last night while I was sleeping. Not sure what caused this.
tower-diagnostics-20180319-0350.zip
JorgeB Posted March 19, 2018
SMART looks fine for both disks, and read errors on two disks at the same time would suggest a connection issue. Unrelated: disk10 has filesystem corruption, so you need to check its filesystem.
tential Posted March 19, 2018 (Author)
A connection issue for both disks, or only one of them? Darn, I had been running for almost 9 days! Sorry to ask, but the two disks in question would be connected to the LSI card, right? I believe I need to power down the server, check the cables, do a parity check (is this necessary for the read errors?), and then go into maintenance mode and do a filesystem check on disk 10. Am I right? Thanks again for all the help; without it I'd have just given up, like I did before.
JorgeB Posted March 19, 2018
4 minutes ago, tential said: "but I believe the two disks in question would be connected to the LSI card right?"
They are, and they share the same miniSAS cable.
4 minutes ago, tential said: "I believe I need to power down the server, check the cables"
Check the cables, or swap the disks onto the other miniSAS cable, so that if the same happens again you'll know whether it follows the cable.
5 minutes ago, tential said: "do a parity check (is this necessary for the read errors?)"
Not necessary.
5 minutes ago, tential said: "and then go into maintenance mode and do a file system check on disk 10."
Correct.
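The filesystem check itself can be sketched as follows. This assumes disk10 maps to /dev/md10 with the array started in maintenance mode, which is the usual Unraid convention (checking through the md device keeps parity in sync):

```shell
# Dry run first: -n reports problems without modifying anything.
xfs_repair -n /dev/md10
# If the dry run looks sane, repair for real (drop the -n).
xfs_repair /dev/md10
# If xfs_repair complains about a dirty log, mounting and cleanly unmounting
# the disk first is the safer route; -L (zero the log) discards pending
# metadata updates and should be a last resort.
```

The GUI "Check Filesystem Status" button on the disk's page runs the same tool, so the command line is only needed if you want the extra options.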
tential Posted March 19, 2018 (Author)
This is the log:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
bad CRC for inode 8589934698
bad CRC for inode 8589934698, would rewrite
would have cleared inode 8589934698
bmap rec out of order, inode 9495795062 entry 88 [o s c] [229862 770801601 1025], 87 [240103 770807746 12287]
correcting nextents for inode 9495795062
bad data fork in inode 9495795062
would have cleared inode 9495795062
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 0
        - agno = 2
        - agno = 3
        - agno = 4
bad CRC for inode 8589934698, would rewrite
would have cleared inode 8589934698
        - agno = 5
        - agno = 6
        - agno = 7
entry "flhd-sps13e11.mkv.part" in shortform directory 9495795009 references free inode 9495795062
would have junked entry "flhd-sps13e11.mkv.part" in directory inode 9495795009
would have corrected i8 count in directory 9495795009 from 2 to 1
bmap rec out of order, inode 9495795062 entry 88 [o s c] [229862 770801601 1025], 87 [240103 770807746 12287]
correcting nextents for inode 9495795062
bad data fork in inode 9495795062
would have cleared inode 9495795062
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
entry "flhd-sps13e11.mkv.part" in shortform directory inode 9495795009 points to free inode 9495795062
would junk entry
would fix i8count in inode 9495795009
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
This is no surprise to me, as I am having the same issue in my Transmission client with that exact file: it's the one where Transmission tells me "Error: Structure needs cleaning". Do I also need to do something for my SSD? That file is from this torrent, probably from when my SSD got filled all the way up.