-
Hello everyone, I have an issue that isn't new. I had it on 6.11 and even on earlier versions, and I think it's about time I start talking about it so we can figure out how to solve it. Randomly, my dockers stop being able to reach devices across networks. Here's my example: I have a MongoDB container on my custom network "my-bridge". It's a normal custom bridge network, nothing fancy, and almost all my dockers are on it. I also have other dockers, for example unifi and pihole, on the br0/eth0 macvlan network; these have IPs in the same range as my server. In normal operation, I can open a shell in my unifi docker and ping my Unraid server IP, other dockers, and computers on the same network (the pihole docker, my router, etc.), and I can also reach dockers on other networks, like MongoDB (which in the end sits at my server IP).

When host access to custom networks fails, two things happen. First, my unifi and pihole containers can no longer talk to my Unraid server (and at the same time to any docker on my custom network my-bridge); if I ping the Unraid server from them, it fails. Second, all my dockers that are not on the macvlan also lose communication with those containers. I found out the hard way that this was the cause of many containers having weird DNS resolution problems (everything points to my pihole, and I have a firewall redirection to pihole).

The fix is simple but has impact: I must stop docker, disable host access to custom networks, apply, re-enable host access, apply, then start docker, and everything works again. The problem is that it will eventually fail again. So my big question: when it happens, what do you need me to collect? I won't be in front of the server at that moment, so I won't be able to say "it happened at 2:15 AM", but I'll at least know roughly when. I do have syslog going to a share and I can grab diagnostics. I know that's the normal procedure for a bug report; since this is a random bug, my goal is to maximize the chances of giving you all the information that could be needed. If I need to enable something right now, while it's still working, so we can catch the failure, I'll do it. Thank you
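In the meantime, here's roughly what I plan to run from the console the next time it breaks, so I have something concrete to attach (container names and the 192.168.1.x addresses are just placeholders for my setup, and it assumes the images ship with ping):

docker exec unifi ping -c 3 192.168.1.10     # macvlan container -> Unraid host IP (this is what fails)
docker exec unifi ping -c 3 192.168.1.1      # macvlan container -> router (should still work)
docker exec mongodb ping -c 3 192.168.1.20   # custom-bridge container -> a macvlan container IP
ip -d link show                              # check whether the host still has the macvlan shim interface (I assume that's what the host-access option creates)
diagnostics                                  # generate the usual diagnostics zip to attach, if the command is available from the console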
-
So I "fixed it" but I'm not sure what break it. What I did is turn off docker, disable host access to custom network, enable it, then start docker and bam, they can now communicate properly.
-
Hello everyone, I have a problem with all my dockers that are on the eth0 macvlan network. If I try to ping/reach the host IP from them, it fails. I do have "Host access to custom networks" enabled, and I have bridging and bonding disabled. Thank you
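For reference, my understanding is that the host-access option works around the usual macvlan host isolation by adding a macvlan "shim" interface on the host, roughly like this (interface names and addresses are just examples for my subnet, not what Unraid literally runs):

ip link add shim-eth0 link eth0 type macvlan mode bridge   # shim on the same parent NIC as the containers
ip addr add 192.168.1.250/32 dev shim-eth0                 # spare address on the LAN for the shim
ip link set shim-eth0 up
ip route add 192.168.1.20/32 dev shim-eth0                 # host route to a macvlan container's IP via the shim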
-
I'm still trying to understand how to upload my PNG. Someone said to put it on the USB and then link to it, but what would the URL be then?
-
Out of nowhere, my refresh now fails. It tries to reach various Sonarr websites and cannot reach them. I tried to ping those sites from everywhere else and they reply, except from this docker, yet I can ping Google from this docker. And when I try to run the import list task, I get this error:
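To narrow it down, I've been comparing raw connectivity against name resolution from inside the container; something like this (the hostname is just an example of a Sonarr metadata endpoint, and the tools depend on what the image ships):

docker exec sonarr ping -c 3 8.8.8.8                # raw connectivity, which does work for me
docker exec sonarr getent hosts skyhook.sonarr.tv   # does DNS actually resolve inside the container?
docker exec sonarr cat /etc/resolv.conf             # which DNS server is the container really using?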
-
Hello, I've got 2 questions: where on the flash drive did you put them, and how did you link to them afterward? Thank you
-
Maybe update the description in CA, or remove it entirely? Or have it flagged as a common error. I didn't know it was dead until I visited this thread like 3 months later.
-
DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.7
Nodiaque replied to jbartlett's topic in Docker Containers
Hello, I just installed this hoping I could check whether both my cache drives have similar speed (I think one of them is very slow). They are both SSDs and are detected by the app, but it says they cannot be benchmarked because it cannot find the mount point. I did attach /mnt to the docker at /mnt/unraid, but I'm still getting the same error. Is it because they are cache drives in a mirror? Thank you
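In the meantime, I've been comparing the two drives by hand with a rough read-speed check like this (sdX/sdY are placeholders for the actual cache devices shown on the Main page, run while the cache is otherwise idle):

hdparm -t /dev/sdX     # sequential read speed of the first cache SSD
hdparm -t /dev/sdY     # same test on the second one for comparison
hdparm -tT /dev/sdX    # optional: cached vs. uncached read comparison
-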
OK. So there's no way to know what the 538 errors are?
-
Hello, I had file corruption on one of my disks yesterday. I finally got it fixed by running xfs_repair -L. After that, I started a parity check without "write corrections". It found 538 errors. Is there a way to see these errors? Is there a way to run the fix without running the entire parity check again? Thank you
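If it helps, this is what I was going to try for locating them, on the assumption that the md driver writes each sync error to syslog (the exact wording of the messages may differ by Unraid version):

grep -i "parity" /var/log/syslog | less    # look for parity/sync messages logged during the check
grep -ic "incorrect" /var/log/syslog       # rough count to compare against the 538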
-
Ah nice. About parity and rebuilds, can anything be done about that? I guess I should run a parity check afterward? Edit: It worked! Only about 4 files had content in them. One seems to be a docker config file, another seems to be Linux boot files (weird, since those are on the USB). I think they're backup files, since everything is now running without any errors.
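For anyone curious, this is roughly how I went through the recovered files (assuming the repaired disk is disk 6, so they land in /mnt/disk6/lost+found):

ls -la /mnt/disk6/lost+found      # xfs_repair names recovered files after their inode numbers
file /mnt/disk6/lost+found/*      # guess each file's type from its content
du -sh /mnt/disk6/lost+found/*    # see which of them actually contain data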
-
Ok, this is the output. Do I mount it manually, or start the array not in maintenance mode?

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
clearing needsrepair flag and regenerating metadata
agi unlinked bucket 30 is 102889438 in ag 5 (inode=10840307678)
agi unlinked bucket 46 is 102889454 in ag 5 (inode=10840307694)
agi unlinked bucket 61 is 102889469 in ag 5 (inode=10840307709)
agi unlinked bucket 47 is 1071805807 in ag 7 (inode=16104191343)
agi unlinked bucket 17 is 13602577 in ag 4 (inode=8603537169)
agi unlinked bucket 19 is 13602579 in ag 4 (inode=8603537171)
agi unlinked bucket 20 is 13602580 in ag 4 (inode=8603537172)
agi unlinked bucket 23 is 13602583 in ag 4 (inode=8603537175)
ir_freecount/free mismatch, inode chunk 11/346247680, freecount 2 nfree 1
finobt ir_freecount/free mismatch, inode chunk 11/346247680, freecount 2 nfree 1
agi unlinked bucket 40 is 346247720 in ag 11 (inode=23968567848)
agi unlinked bucket 8 is 306951432 in ag 9 (inode=19634304264)
agi unlinked bucket 41 is 306923689 in ag 9 (inode=19634276521)
agi unlinked bucket 10 is 361219338 in ag 8 (inode=17541088522)
agi unlinked bucket 11 is 361219339 in ag 8 (inode=17541088523)
sb_ifree 2411, counted 2418
sb_fdblocks 1978355686, counted 2005596978
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 8
        - agno = 4
        - agno = 7
        - agno = 1
        - agno = 9
        - agno = 5
        - agno = 6
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 3
        - agno = 14
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 8603537169, moving to lost+found
disconnected inode 8603537171, moving to lost+found
disconnected inode 8603537172, moving to lost+found
disconnected inode 8603537175, moving to lost+found
disconnected inode 10840307678, moving to lost+found
disconnected inode 10840307694, moving to lost+found
disconnected inode 10840307709, moving to lost+found
disconnected inode 16104191343, moving to lost+found
disconnected inode 17541088522, moving to lost+found
disconnected inode 17541088523, moving to lost+found
disconnected inode 19634276521, moving to lost+found
disconnected inode 19634304264, moving to lost+found
disconnected inode 23968567848, moving to lost+found
Phase 7 - verify and correct link counts...
Maximum metadata LSN (1:3681890) is ahead of log (1:2).
Format log to cycle 4.
done
-
Tried without -n:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed. Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair. If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

Tried mounting.
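For the "mount to replay the log" step, what I attempted was roughly this, on the assumption that disk 6 is exposed as /dev/md6 while the array is started in maintenance mode (the /temp path is just an example mount point):

mkdir -p /temp/disk6
mount /dev/md6 /temp/disk6    # mounting replays the XFS log if it succeeds
umount /temp/disk6            # unmount before re-running xfs_repair
xfs_repair /dev/md6           # repair again without -n, and without -L if the mount worked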
-
Hello, I had a power failure this weekend; my UPS took over and did a graceful shutdown. Today I found the drive mounted read-only with no disk access, and once I shut down the server, checked all the connections, and started it back up, I was greeted with this: Now, I don't know what happened to the disk. It had no SMART errors and was working fine. If I "reformat" the drive, will the array rebuild itself and put the data back? At the same time, a parity check started yesterday (parity runs on the 1st of each month). Could that lead to problems (I cancelled the parity check)? I installed a new drive about 2 weeks ago, so I know the parity should be good. Thank you

Edit: I started the array in maintenance mode and ran the xfs_repair -n that I saw under disk 6's options. This is the output:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used. Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
ir_freecount/free mismatch, inode chunk 11/346247680, freecount 2 nfree 1
finobt ir_freecount/free mismatch, inode chunk 11/346247680, freecount 2 nfree 1
agi unlinked bucket 40 is 346247720 in ag 11 (inode=23968567848)
agi unlinked bucket 8 is 306951432 in ag 9 (inode=19634304264)
agi unlinked bucket 41 is 306923689 in ag 9 (inode=19634276521)
agi unlinked bucket 30 is 102889438 in ag 5 (inode=10840307678)
agi unlinked bucket 46 is 102889454 in ag 5 (inode=10840307694)
agi unlinked bucket 61 is 102889469 in ag 5 (inode=10840307709)
agi unlinked bucket 17 is 13602577 in ag 4 (inode=8603537169)
agi unlinked bucket 19 is 13602579 in ag 4 (inode=8603537171)
agi unlinked bucket 20 is 13602580 in ag 4 (inode=8603537172)
agi unlinked bucket 47 is 1071805807 in ag 7 (inode=16104191343)
agi unlinked bucket 23 is 13602583 in ag 4 (inode=8603537175)
agi unlinked bucket 10 is 361219338 in ag 8 (inode=17541088522)
agi unlinked bucket 11 is 361219339 in ag 8 (inode=17541088523)
sb_ifree 2411, counted 2418
sb_fdblocks 1978355686, counted 2005596972
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 5
        - agno = 9
        - agno = 14
        - agno = 6
        - agno = 7
        - agno = 12
        - agno = 3
        - agno = 8
        - agno = 1
        - agno = 10
        - agno = 11
        - agno = 13
        - agno = 4
        - agno = 2
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 8603537169, would move to lost+found
disconnected inode 8603537171, would move to lost+found
disconnected inode 8603537172, would move to lost+found
disconnected inode 8603537175, would move to lost+found
disconnected inode 10840307678, would move to lost+found
disconnected inode 10840307694, would move to lost+found
disconnected inode 10840307709, would move to lost+found
disconnected inode 16104191343, would move to lost+found
disconnected inode 17541088522, would move to lost+found
disconnected inode 17541088523, would move to lost+found
disconnected inode 19634276521, would move to lost+found
disconnected inode 19634304264, would move to lost+found
disconnected inode 23968567848, would move to lost+found
Phase 7 - verify link counts...
would have reset inode 8603537169 nlinks from 0 to 1
would have reset inode 23968567848 nlinks from 0 to 1
would have reset inode 8603537171 nlinks from 0 to 1
would have reset inode 8603537172 nlinks from 0 to 1
would have reset inode 8603537175 nlinks from 0 to 1
would have reset inode 17541088522 nlinks from 0 to 1
would have reset inode 17541088523 nlinks from 0 to 1
would have reset inode 19634276521 nlinks from 0 to 1
would have reset inode 16104191343 nlinks from 0 to 1
would have reset inode 19634304264 nlinks from 0 to 1
would have reset inode 10840307678 nlinks from 0 to 1
would have reset inode 10840307694 nlinks from 0 to 1
would have reset inode 10840307709 nlinks from 0 to 1
No modify flag set, skipping filesystem flush and exiting.

I tried a manual mount in a temp folder and was greeted with these errors:
-
Hello everyone, I'm currently on Unraid 6.11.5 and I noticed today that if I click on the log icon, it shows an empty screen. If I go into /var/log, I can see syslog and syslog.1. The date on syslog is June 9 and on syslog.1 it is June 10; syslog.1 is 2.7 MB and syslog is empty. I feel something went wrong when syslog did a log rotation, and now I don't have any logging anymore. I ran /etc/rc.d/rc.rsyslogd restart and logging started back up; looking at the messages, it seems syslog was simply not running anymore.
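For the record, this is roughly what I checked and ran from the console (plain commands, nothing Unraid-specific beyond the rc script that ships with it):

ls -lh /var/log/syslog*          # syslog was 0 bytes, syslog.1 still had the old entries
pgrep -a rsyslogd                # returned nothing for me, so the daemon had apparently died during rotation
/etc/rc.d/rc.rsyslogd restart    # bring rsyslogd back up; new entries started appearing right away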