Posts posted by JorgeB
-
Check the filesystem on disk11 and run it without -n, then reboot to clear the logs and post new diags during a parity check.
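If you'd rather do it from the command line than the GUI check, a minimal sketch, assuming disk11 is xfs, the array is started in maintenance mode, and a 6.12 or later release (older releases use /dev/md11 without the p1 suffix):

xfs_repair -n /dev/md11p1   # optional read-only check first
xfs_repair /dev/md11p1      # actual repair, -n omitted so fixes are written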
-
There's no parity check running in those diags.
-
The syslog in the diags starts over after every boot; enable the syslog server and post that after a crash.
-
You would need to manually delete and recreate the recovery partition so that it sits next to the main one; you can do that with diskpart.
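A rough sketch of the diskpart side, assuming a GPT disk; the disk and partition numbers below are examples, always confirm them with list disk / list partition first:

reagentc /disable
diskpart
rem -- inside diskpart now; disk/partition numbers are examples, verify with list first
list disk
select disk 0
list partition
select partition 4
delete partition override
rem -- after resizing/moving the main partition, recreate recovery at the end
create partition primary
format quick fs=ntfs label="Recovery"
set id=de94bba4-06d1-4d40-a16a-bfd50179d6ac
gpt attributes=0x8000000000000001
exit
reagentc /enable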
-
Are you using any power-saving tools?
-
There shouldn't be any difference between using SAS or SATA for parity. Try to use the drive with the fastest sequential read/write speed, which is usually also the largest; I just don't recommend SMR drives.
-
If you create a new docker image and add a couple of new containers, different from the ones you are using, and without restoring the old ones initially, do you see the same?
-
Yes, with xfs_repair it's not always easy to see if something was done or not unless you manually check the exit code, but if an issue was detected it should now be fixed. Reboot to clear the log and keep an eye on it for more errors like the one above.
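For reference, the read-only run is the one whose exit status is meaningful (as far as I know a run without -n always exits 0), so a quick way to check, with X standing in for the disk number:

xfs_repair -n /dev/mdXp1   # read-only check
echo $?                    # 1 = corruption detected, 0 = clean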
-
8 minutes ago, saltz said:
But my array is able to start again....
That's good. I don't know what was causing md to crash, I've never seen that before; possibly something specific you did.
17 minutes ago, saltz said:
And try to read the data on my old bad disk to see if and what I can restore from it. Does that make any sense?
It does; most times disks are not completely dead, and maybe the disk itself is OK.
-
-
Run it again without -n or nothing will be done.
As for disk4, if the issue was a week ago and there aren't diags from that time, there's not much we can see now; if it happens again, post diags at that time.
-
Should be fine; don't forget to back up the flash drive first, just in case.
-
1 hour ago, saltz said:
Is there any way I can revert to my configuration before the parity swap procedure?
You can try:
-Tools -> New Config -> Retain current configuration: All -> Apply
-Check all assignments and assign any missing disk(s) if needed, including the old parity2 disk. You also need a spare disk to assign as disk3; it can be temporary for now, but it should be the same size or larger than the old disk3.
-IMPORTANT - Check both "parity is already valid" and "maintenance mode" and start the array (note that the GUI will still show that data on the parity disk(s) will be overwritten; this is normal since it doesn't account for the checkbox, and parity won't be overwritten as long as it's checked)
-Stop array
-Unassign disk3
-Start array (in normal mode now) and post new diags.
-
59 minutes ago, docbillnet said:
Meaning if say I upload 500 GB to my 1 TB drive today, and the next day do the same thing,
Usually you set the mover to run overnight to move everything to the array, or was that time not enough? It should have been enough for 500GB.
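The mover can also be started manually from a console/SSH session instead of waiting for the schedule; a small sketch, assuming a recent Unraid release (on older releases the script takes no argument, just run mover):

mover start                 # start moving cache shares to the array now
tail -f /var/log/syslog     # watch progress if mover logging is enabled in Settings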
-
Which disk was replaced?
Disk1 needs a filesystem check:
May 10 21:00:02 Tower kernel: XFS (md1p1): Free Inode BTree record corruption in AG 1 detected!
-
Disk4 also getting stuck is kind of expected, but the emulated disk3 not mounting is not, meaning something in the process didn't go well. I assume disk3 was also xfs? Try starting the array in maintenance mode and check the filesystem on that disk, setting it to xfs first, because if that disk doesn't mount there's no point in rebuilding anyway.
-
Many small files take much longer to transfer than fewer large ones.
-
I assume it's going to be the same, but before trying the new disk, set disk2 to btrfs as well, unassign disk3, and start the array; this is to see if the emulated disk3 is mounting. If it is, I would expect it will also crash, but post diags after anyway.
-
Post current diags.
-
If you have the appdata and the user-templates folder you can recreate the docker image:
https://docs.unraid.net/unraid-os/manual/docker-management/#re-create-the-docker-image-file
Also see below if you have any custom docker networks:
https://docs.unraid.net/unraid-os/manual/docker-management/#docker-custom-networks
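If you're not sure whether any custom networks exist, you can check from the console before recreating the image; the network name below is just an example:

docker network ls                   # bridge, host and none are the built-in ones
docker network create mycustomnet   # recreate any custom network afterwards if needed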
-
First post a screenshot of Main so we can see the current array status.
-
Emulated disk1 is mounting, if contents look correct you can rebuild on top and sync parity at the same time:
https://docs.unraid.net/unraid-os/manual/storage-management#rebuilding-a-drive-onto-itself
-
-
Please note that virtualizing Unraid is not officially supported, but post the diagnostics.
-
You can do a new config: Tools -> New Config -> Preserve all
Then assign disk3 and check "parity is already valid" before starting the array. It's still a good idea to run a parity check after, since if you started the array without disk3 there will be some sync errors.
-
The filesystem should be fixed now; reboot to clear the logs.
I don't recommend using -L unless xfs_repair asks for it.
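For context, "asks for it" means xfs_repair aborts with a message along these lines (wording varies by version), and even then mounting the disk once to replay the log is the safer first option:

ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed. Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair. If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.

xfs_repair -L /dev/mdXp1   # last resort only; zeroing the log can lose recent metadata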
-
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed
Try recreating the docker image:
https://docs.unraid.net/unraid-os/manual/docker-management/#re-create-the-docker-image-file
Also see below if you have any custom docker networks:
https://docs.unraid.net/unraid-os/manual/docker-management/#docker-custom-networks