xoC
Posts posted by xoC
-
Sometimes it releases the CPU for a few seconds and then goes back to full blast. I can't even load a page from the web GUI when it happens.
-
Usually after 12+ hours of uptime, my CPU goes to 100% and never seems to come back down. Every Docker container becomes unresponsive and I have to reboot. Sometimes it doesn't even manage the reboot, as it is "too busy"...
This time I finally got the diagnostics when the CPU was at max and attached them to this post.
It has been doing this since a recent Unraid update; I think it started around 6.12.1 or something like that.
Thanks !
-
Ok, noted !
Thanks a lot for your quick help
-
Thanks.
I started the array and it did mount indeed! It immediately started a rebuild.
Do you think one or both disks are failing and should be replaced?
Since I'm 2 disks down, the array is currently unprotected; maybe I should not try to rebuild if a disk is in bad shape.
Edit: no lost+found folder on either disk.
-
-
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
Should I try with -L ?
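For reference, the sequence that xfs_repair message asks for can be sketched like this. The device path /dev/md1 and the temporary mount point are placeholders, not values from the thread; on Unraid this would be run from the console with the array in maintenance mode, using your own disk number:

```shell
# Sketch of the log-replay sequence xfs_repair requests.
# /dev/md1 and /mnt/tmpfix are placeholders - substitute your disk.
mkdir -p /mnt/tmpfix
mount -t xfs /dev/md1 /mnt/tmpfix   # mounting replays the journal
umount /mnt/tmpfix

xfs_repair -n /dev/md1              # dry run first: report only
xfs_repair /dev/md1                 # then the actual repair

# Only if the mount itself fails should -L (destroy the log) be
# considered, since it can lose the pending metadata changes:
# xfs_repair -L /dev/md1
```

The point of mounting first is exactly what the error says: the journal still holds metadata changes, and a successful mount applies them so -L never has to destroy anything.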
-
So here it is for disk1. It had many (CRC) errors, and yesterday I did a run with -n and then one without -n. Today it says:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is
being ignored because the -n option was used.  Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
sb_fdblocks 121721460, counted 125133027
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 1
        - agno = 2
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
Now for disk2: I did the same yesterday, a lot of errors with -n; I tried without -n and it didn't complete, but I don't remember the error. Disk2's filesystem check from today is attached as a txt file because it is way too long.
-
Thanks for your quick answer.
I thought the zip I posted in the first post was the diagnostics ?
-
-
Hello,
I have 2 disks which have failed. I'm kind of lost about what to do, as all the usual links to the FAQ are broken.
I attached my diagnostics.
Even when I unselect both disks (set them to empty) and start the array, it gets stuck and also shows "Mounting" on disk 2.
It does the same thing after trying to rebuild the array.
Thanks in advance.
-
On 3/26/2023 at 9:38 PM, mgutt said:
This probably does not respect hardlinks. Ask the plugin dev for this new feature if it's the case.
If you want to move all backups including the storage-friendly hardlinks, you need to use a command which supports this. There exists two:
Copy
cp --archive /mnt/disk3/sharename/Backups /mnt/disk5/sharename
Move
rsync --archive --hard-links --remove-source-files /mnt/disk3/sharename/Backups /mnt/disk5/sharename
find /mnt/disk3/sharename/Backups -depth -type d -empty -delete
Both create the "Backups" subdir in the destination, but rsync moves the files (and the additional find command removes the now-empty dirs from the source, since rsync removes only transferred files, not directories).
Note: If you append " & disown" to the command, it keeps running in the background even if you close the terminal. This can be useful if the transfer takes a long time and you don't want to keep the window open the whole time. Example:
rsync --archive --hard-links --remove-source-files /mnt/disk3/sharename/Backups /mnt/disk5/sharename & disown
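As a quick sanity check that a copy preserved the hardlinks, here is a minimal sketch using throwaway temp directories; the paths are made up for the demo, and on a real server you would compare inode numbers on the actual share paths instead:

```shell
#!/bin/sh
# Hedged sketch: check that "cp --archive" keeps hardlinks as hardlinks.
# src/dst are throwaway temp dirs standing in for the real disk shares.
set -e
src=$(mktemp -d)
dst=$(mktemp -d)

printf 'backup payload\n' > "$src/full"
ln "$src/full" "$src/incremental"   # hardlink: same inode, no extra space

cp --archive "$src/." "$dst/"       # --archive preserves the link

# Same inode number and a link count of 2 mean the link survived the copy.
stat -c '%i %h' "$dst/full"
stat -c '%i %h' "$dst/incremental"
```

If the two stat lines show the same inode and a link count of 2, the copy kept the space-saving structure; a tool that shows different inodes in the destination duplicated the data instead.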
Awesome, thanks a lot for your answer !
And for the actual files that got duplicated, instead of hardlinked, after my naive copy: is there a "search function" or something like that which could take care of them?
-
Hello,
I set up the script quite a long time ago with 2 dedicated disks, and they became full, so at that time I allowed the share to use other disks.
I've extended my backup capacity with a new, bigger disk and just began naively copying from those other disks (inside the Unraid GUI, with the Dynamix plugin). I stopped the copy because it seems to duplicate each file as a full new version with all of its data, and it was filling my new drive quickly: I had 80 GB to transfer and it had already used 550 GB on the new disk before the copy finished.
Keep in mind I'm totally a newb with file transfer, rsync and all that, so how could I:
1) migrate the share on the wanted new disk
2) "delete" all the copies which are just taking up space multiple times for the same file.
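For step 2, one hedged approach: when two files are byte-identical duplicates, they can be collapsed back into a single hardlink with cmp and ln; dedicated tools such as jdupes or rdfind (with their hardlinking options) automate this across whole trees, assuming one of them is available on your system. A minimal manual sketch with throwaway demo files:

```shell
#!/bin/sh
# Hedged sketch: collapse two byte-identical files back into one hardlink.
# All paths here are throwaway demo files, not the real share.
set -e
dir=$(mktemp -d)
printf 'same content\n' > "$dir/copy1"
printf 'same content\n' > "$dir/copy2"   # duplicate occupying extra space

# Only relink when the contents really are identical.
if cmp -s "$dir/copy1" "$dir/copy2"; then
    ln -f "$dir/copy1" "$dir/copy2"      # replace copy2 with a hardlink to copy1
fi

# Both names now share one inode; the link count is 2.
stat -c '%i %h' "$dir/copy1"
stat -c '%i %h' "$dir/copy2"
```

The cmp guard matters: hardlinking files that merely have the same name would silently discard one version, so a real dedup pass must compare contents first.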
Thanks a lot in advance !
2 disks have failed, and one of them gets the array stuck on "Mounting" even when it is not present
in General Support
Hello again,
It's been a nightmare since then: every time the parity sync runs, it finishes correctly, and then one or two disks get disabled immediately.
I shut the server off one week ago because I had no time to investigate.
Yesterday, Parity 1 and Disk 1 were disabled. I changed the SATA cables for parity 1, parity 2 and disk 1, and ran a rebuild.
It completed overnight and, having looked at the logs, it disabled disk 1 and disk 2 twenty minutes later.
Is it OK to continue in this topic, or should I open a new, unresolved topic instead?
Attached are the diagnostics; the server has not been powered down since then.
nastorm-diagnostics-20230901-0912.zip