Posts posted by BlakeB
16 hours ago, itimpi said:
Your diagnostics show that you only have disk4 and disk10 in the include list for your _Music share, but that there are also files on disk5, disk6 and cache for that share. You also have it set to only have Primary storage as the array so mover ignores it for moving files from cache to array. Unraid never moves files between array drives so if you want the files to be moved off disk5 and disk6 you need to do this manually.
Your appdata and system shares have Mover direction set to cache->array, which means you want files moved to the array. It is more normal to have it set to array->cache to maximise performance for these shares. The domains share is similar.
Quite a few of your other shares seem to have files on drives that are not on the include list for that share.
Whether any of this relates to your problem I am not sure but it will not hurt to get everything consistent and tidied up.
I guess I'm confused about the mover action here. I thought the flow was: downloads land on cache, then things move to the array, i.e. cache->array. My cache usually fills up quickly, so I'm not sure I understand the logic of switching it to array->cache.
-
The share is called _Music.
-
Yes, Disk10 looks like it's mounted now, but Disk4 is still showing unmountable even after the parity rebuild.
-
10 minutes ago, trurl said:
Did you capture the output of check filesystem on disk10 so you can post it? Did you actually do the repair (without -n)?
I just tried without the -n
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
sb_ifree 5382, counted 5482
sb_fdblocks 2328188686, counted 2345233769
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 4
        - agno = 3
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 6
        - agno = 5
        - agno = 2
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (2:1818096) is ahead of log (2:1817289).
Format log to cycle 5.
done
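For anyone following along, a minimal sketch of the check/repair sequence being discussed, assuming disk10 maps to /dev/md10 on this system (the array must be in maintenance mode); the commands are built as strings and echoed rather than executed so the sketch is safe to review:

```shell
dev=/dev/md10                      # assumed device node for the disk10 slot
check="xfs_repair -n $dev"         # dry run: report problems, change nothing
repair="xfs_repair $dev"           # the actual repair ("without -n")
echo "$check"
echo "$repair"
```

If xfs_repair refuses to run because the log cannot be replayed, `xfs_repair -L` zeroes the log first, at the cost of losing any transactions in it; that is a last resort.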
-
Parity rebuild of Disk4 was successful. Disk10 is still showing unmountable. I ran a file system check on it and it didn't look like there were any issues.
-
14 minutes ago, JorgeB said:
Yes, that disk also needs a filesystem check, missed it before.
Sep 11 05:45:35 COLDHEART kernel: XFS (md10): metadata I/O error in "xfs_read_agf+0x6d/0xa3 [xfs]" at daddr 0x27fffffd9 len 1 error 117
Sep 11 05:45:35 COLDHEART root: mount: /mnt/disk10: mount(2) system call failed: Structure needs cleaning.
Also see here to see if it fixes a PCIe error that's constantly spamming the log.
Ah, so you're seeing what fills up my log all the time. I've wanted to solve that too.
Just want to make sure this is the right fix before applying it; that thread seems to cover a lot of sub-topics.
-
I'll post and check Disk10 when the rebuild is done. I don't think I can stop the array now that the rebuild has started; it should take about 12 hours. I'll post the diagnostics then, or tomorrow.
-
9 minutes ago, trurl said:
You don't mention if emulated disk4 was mountable. If not, you are rebuilding an unmountable filesystem, which you will have to repair after rebuild (assuming it is repairable).
Since you are rebuilding in maintenance mode, diagnostics taken now won't tell us whether disk4 is mountable since maintenance mode doesn't attempt to mount anything.
You could stop rebuild, start array in normal mode to see if emulated disk4 is mountable, and post diagnostics.
Or you could just let rebuild complete and we can deal with the consequences after.
This is what it's currently doing in normal mode. I stopped the parity operation in maintenance mode and started this; it's rebuilding, but it also looks like Disk10 and Disk4 are unmountable.
-
16 minutes ago, JorgeB said:
Check filesystem on the emulated disk4, run it without -n
Okay, I think I might be in business now. I went into maintenance mode without Disk4 assigned, corrected Disk10's filesystem setting to xfs, stopped the array, added Disk4 back, started again in maintenance mode, and it's currently reconstructing Disk4!
-
11 hours ago, trurl said:
It recognizes it, it just considers it disabled.
I guess we could New Config it back into the array. New Config will accept all disks assignments so none are disabled. Then we can re-disable disk4 so it can be rebuilt (after making sure emulated disk4 is mountable).
Before following these instructions, wait a few hours to see if @JorgeB has any other ideas.
This process isn't documented, but we have used it many times. It is very important to follow the instructions precisely.
- Go to Tools - New Config, Retain All, Apply. Not entirely sure it will keep the disk4 assignment since it thinks it is wrong. If it doesn't, assign disk4 before continuing, leave all other assignments as they are.
- In Main - Array Operation, check BOTH Maintenance mode and Parity valid checkboxes, then start the array.
- Stop the array, unassign disk4, then start the array. This will disable and emulate disk4.
Then post new diagnostics so we can see if emulated disk4 is mountable.
-
Thanks! I'll wait for @JorgeB and try that tomorrow unless they have another idea.
-
4 minutes ago, trurl said:
You can't rebuild from single parity when you have 2 disks disabled. Don't understand how another got disabled when one already was. Was disk10 already disabled when you decided to replace disk4?
No, Disk10 was fine when I had to take out Disk4. It's still in there; I never disabled it. Disk10 is the same disk it's always been. That's my question: why won't it recognize it?
-
8 hours ago, trurl said:
Do you have the original disk4?
No, that is the disk that died and was sent back to Seagate. The new precleared one is the replacement that I need to rebuild from parity.
-
I had a drive that died. I put in the replacement to start the parity rebuild, but one of my other drives won't connect. I've tried swapping the SATA power and data cables with known-good ones, and that didn't resolve it. Unraid sees the drive but wants to emulate it; either way I can't start the rebuild because this one drive isn't reporting correctly. I ran a SMART test on it last night and it came back fine.
-
Nevermind, I missed the format button at the bottom. I'm good.
-
On 6/8/2021 at 2:35 PM, BlakeB said:
I think the drive was bad. I couldn't even start a pre-read on it. I tried it in another computer last night and same thing. I dropped off the drive for RMA with Newegg this morning.
Confirmed my new drive was bad. The replacement precleared normally within a day, and parity was rebuilt in about 26 hours. Awesome.
-
22 hours ago, gfjardim said:
Please send me your diagnostics file (you can PM me if you wish)
I think the drive was bad. I couldn't even start a pre-read on it. I tried it in another computer last night and same thing. I dropped off the drive for RMA with Newegg this morning.
-
I've had more success with other drives on Preclear than this one. Three days in on the zeroing and at only 3%. Thoughts on what is going on?
# unRAID Server Preclear of disk 5QG05UME
# Cycle 1 of 1, partition start on sector 64.
#
# Step 1 of 4 - Zeroing in progress: (3% Done)
# ** Time elapsed: 93:27:07 | Write speed: 0 MB/s | Average speed: 1 MB/s
#
# Cycle elapsed time: 95:34:39 | Total elapsed time: 95:34:39
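For what it's worth, the numbers in that transcript imply the zeroing pass alone would take months at the current rate; a quick back-of-envelope check:

```shell
# 3% done after roughly 93.5 hours of zeroing; extrapolate to 100%.
awk 'BEGIN { printf "%.0f days\n", (93.45 / 0.03) / 24 }'   # → 130 days
```

An average write speed of 1 MB/s on a modern drive points to a failing disk, a bad cable, or a bad controller port rather than a merely slow pass.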
-
Ended up doing a factory reset on the router after leaving it unplugged for 24 hours. New external IP and looks like I can access the internet in Unraid again.
-
13 minutes ago, trurl said:
So did you?
Seriously, the bots found you immediately and not just India. They are bots and will attack repeatedly and relentlessly.
Feb 12 14:35:00 Tower in.telnetd[4568]: connect from 183.171.198.27 (183.171.198.27)
Feb 12 14:40:44 Tower in.telnetd[19067]: connect from 93.148.246.198 (93.148.246.198)
Feb 12 14:43:01 Tower sshd[24810]: Invalid user admin from 77.247.181.165 port 1230
Feb 12 14:43:04 Tower sshd[24894]: Invalid user admin from 104.244.73.205 port 38237
Feb 12 14:47:41 Tower sshd[3754]: Invalid user user from 128.199.206.22 port 56242
Feb 12 14:52:51 Tower in.telnetd[17287]: connect from 212.210.173.74 (212.210.173.74)
Feb 12 15:06:23 Tower in.telnetd[19088]: connect from 172.251.43.127 (172.251.43.127)
Feb 12 15:39:21 Tower in.telnetd[5546]: connect from 108.6.48.190 (108.6.48.190)
Feb 12 15:53:18 Tower sshd[8503]: Failed password for root from 174.138.15.47 port 47416 ssh2
Feb 12 15:53:20 Tower in.telnetd[8673]: connect from 117.192.88.101 (117.192.88.101)
Feb 12 15:54:36 Tower in.telnetd[11862]: connect from 42.192.161.117 (42.192.161.117)
Feb 12 16:08:23 Tower in.telnetd[14800]: connect from 103.121.234.144 (103.121.234.144)
Feb 12 16:08:43 Tower in.telnetd[15657]: connect from 104.178.219.195 (104.178.219.195)
Feb 12 16:17:17 Tower in.telnetd[5035]: connect from 179.96.241.235 (179.96.241.235)
https://www.abuseipdb.com/check/183.171.198.27 Malaysia
https://www.abuseipdb.com/check/93.148.246.198 Italy
https://www.abuseipdb.com/check/77.247.181.165 Netherlands
https://www.abuseipdb.com/check/104.244.73.205 Luxembourg
https://www.abuseipdb.com/check/128.199.206.22 Singapore
https://www.abuseipdb.com/check/212.210.173.74 Italy
https://www.abuseipdb.com/check/172.251.43.127 United States of America
https://www.abuseipdb.com/check/108.6.48.190 United States of America
https://www.abuseipdb.com/check/174.138.15.47 Netherlands
https://www.abuseipdb.com/check/117.192.88.101 India
https://www.abuseipdb.com/check/42.192.161.117 China
https://www.abuseipdb.com/check/103.121.234.144 India
https://www.abuseipdb.com/check/104.178.219.195 United States of America
https://www.abuseipdb.com/check/179.96.241.235 Brazil
And no doubt others since then if your server is still on the internet.
Nice. A Verizon tech in India is the one who suggested enabling DMZ on my router when this all started. I disabled it a few hours ago. Should I just force an IP reset at this point?
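A quick, self-contained illustration of counting intrusion attempts like those in the log trurl quoted; here the input is a three-line sample modeled on that log, while on a live Unraid box it would be /var/log/syslog:

```shell
# Build a small sample log modeled on the quoted syslog excerpt.
log=$(mktemp)
cat > "$log" <<'EOF'
Feb 12 14:35:00 Tower in.telnetd[4568]: connect from 183.171.198.27 (183.171.198.27)
Feb 12 14:43:01 Tower sshd[24810]: Invalid user admin from 77.247.181.165 port 1230
Feb 12 15:53:18 Tower sshd[8503]: Failed password for root from 174.138.15.47 port 47416 ssh2
EOF

# Count telnet connects plus invalid/failed SSH logins.
grep -cE 'in\.telnetd|Invalid user|Failed password' "$log"   # → 3
```

Swapping `-c` for `-oE 'from [0-9.]+'` would list the offending IPs instead of counting lines.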
-
4 minutes ago, strike said:
Try setting DNS server 1 to 1.1.1.1, server 2 to 1.0.0.1, and server 3 to 192.168.1.1.
Nothing, same result.
Posted in "Subfolder in a share not reading" (General Support):
I'm at 1.5 TB cache and a 102 TB array. I figured that was a decent setup.