Posts posted by brambo23
-
Happy to report that the data is back and the parity is rebuilding with no issues.
Appreciate all the help!
-
1 minute ago, trurl said:
You did this on the physical disks again? I ask because repairing the emulated disks is the usual method.
On the physical disks, yes. I did not do anything with the emulated disks (that I know of).
2 minutes ago, trurl said:
If the physical disks are mountable then yes. Do they have lost+found folders after this additional repair?
I repaired and mounted the drives, then looked through them again; I found no folder or anything else named "lost+found".
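For anyone repeating the lost+found check above, here is a minimal sketch of how to look for it on each repaired disk's mount point (the /mnt/diskN paths are examples for the two disks in this thread; adjust to your disk numbers):

```shell
# Minimal sketch: report whether a repaired disk's mount point contains a
# lost+found folder (where xfs_repair places orphaned files it recovers).
check_lost_found() {
  mnt="$1"
  if [ -d "$mnt/lost+found" ]; then
    echo "$mnt: lost+found present - review recovered files"
  else
    echo "$mnt: no lost+found"
  fi
}

# Example mount points from this thread:
check_lost_found /mnt/disk8
check_lost_found /mnt/disk9
```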
-
5 minutes ago, trurl said:
What exactly do you mean by this? If you started rebuilding on the physical disks, you have already altered their contents and they are most likely unmountable now.
What I meant was: I started the array and it immediately began the rebuild process. Out of fear I cancelled the process, and the drives were not mountable until I ran a file system check. It repaired the issue again, and then I was able to mount the drives.
It would appear the corruption is already recorded in parity.
3 minutes ago, trurl said:
Do you have another copy of anything important and irreplaceable? Parity is not a substitute for backups. Plenty of ways to lose data besides failed disks, including user error.
There's nothing terribly important lost, mostly just time. I don't store important personal data on this array for that reason. If it were to die, all I would have lost is time. My goal is to save myself time if at all possible. I built this server many years ago, but when it comes to the details of how everything works, I will admit I am not the most knowledgeable (if that isn't already apparent).
7 minutes ago, trurl said:
Rebuild makes the physical disks have the exact same contents as the emulated disks. That is all it can do.
So having said that, it seems that if I want to keep the data and the repaired drives, the best move is to reset the array configuration?
-
Just now, trurl said:
And you must allow it to rebuild parity, because parity is out-of-sync with the changes filesystem repair made to the physical disks.
So they will remain unmountable until the rebuild finishes?
-
Judging by all the information I'm reading, it sounds like resetting the array configuration is the only way to make sure the data on the two drives remains intact.
-
11 minutes ago, trurl said:
Are there lost+found folders on the physical disks?
I do not see any lost+found folders on the physical disks in question.
I tried re-adding the drives and "rebuilding" the array, but when I started it, disks 8 and 9 were still listed as unmountable.
-
3 minutes ago, trurl said:
I never did post a link about that in this thread.
You are correct, you didn't post a direct link, but it was in the contents of the page you did link:
https://docs.unraid.net/unraid-os/manual/storage-management/#rebuilding-a-drive-onto-itself
-
7 hours ago, trurl said:
That is exactly what I mean by "reassign".
It will rebuild them with the contents of the emulated disks, which were unmountable.
So the second problem with that is I realized the emulated data is missing the data that's on those physical disks.
What happens if I rebuild the array in that state? Will it rebuild the array from the emulated disks and then add the data from the physical disks? Or will the physical disks end up matching the emulated disks, so I lose that data anyway?
-
So I looked at the link you sent about rebuilding data onto itself.
The emulated data is missing the data that is on those drives.
Should I copy the data off those drives in order to properly rebuild the array? I assume that if I add those drives back, since the emulated data is missing the data on those drives, it will blank out that data.
-
3 minutes ago, trurl said:
Can you actually see your data on each of the disks?
If you reassign them it will want to rebuild. You will have to New Config them back into the array and rebuild parity.
https://docs.unraid.net/unraid-os/manual/storage-management/#reset-the-array-configuration
I can see data on the disks.
Do I have to reassign them? Can I just add them back to the slots they were originally assigned to?
-
2 minutes ago, trurl said:
Do it again without -n. If it asks for it use -L
I ran it again (I didn't change any flags) and now it says no data corruption was detected:
FS: xfs
Executing file system check: /sbin/xfs_repair -n '/dev/sdc1' 2>&1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 4
- agno = 7
- agno = 8
- agno = 10
- agno = 5
- agno = 2
- agno = 9
- agno = 12
- agno = 6
- agno = 3
- agno = 11
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
No file system corruption detected!
-
So after this check, I mounted and unmounted the drives (in Unassigned Devices) and it was able to read the amount of data on each drive. I ran the file system check on both drives and now both of them say no file system corruption was detected.
Also, I did buy and install a new card based on the recommendation of @trurl.
Should I be OK to mount these drives back in the array and try to start the array again?
-
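As a side note, the check-versus-repair distinction from the exchange above ("Do it again without -n. If it asks for it use -L") can be sketched as follows. This is only an illustration of the command-line forms, not Unraid's exact webUI invocation; device names are the ones from this thread, and repairs should only be run with the array stopped or via the webUI's filesystem check tool:

```shell
# Sketch of the xfs_repair invocations discussed in this thread (illustrative
# only). -n reports without modifying; omitting -n performs the repair; -L
# zeroes the log and should be used only when xfs_repair asks for it.
repair_cmd() {
  dev="$1"; mode="$2"
  case "$mode" in
    check)    echo "xfs_repair -n $dev" ;;  # no-modify, report only
    repair)   echo "xfs_repair $dev" ;;     # actually fixes the filesystem
    zero-log) echo "xfs_repair -L $dev" ;;  # destructive log zeroing, last resort
  esac
}

repair_cmd /dev/sdc1 check
repair_cmd /dev/sdc1 repair
```
-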
On 3/15/2024 at 6:18 AM, JonathanM said:
Pretty sure you can, I can't think of any changes that would affect your ability to recover.
So I upgraded and installed.
On 3/14/2024 at 8:51 PM, trurl said:
Check filesystem on each of those disks, using the webUI. Capture the output and post it.
For the drive in sdc (252MG):
FS: xfs
Executing file system check: /sbin/xfs_repair -n '/dev/sdc1' 2>&1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used. Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
- scan filesystem freespace and inode maps...
sb_fdblocks 3210815618, counted 3234651569
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 3
- agno = 8
- agno = 10
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 11
- agno = 9
- agno = 2
- agno = 12
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
File system corruption detected!
For the drive in sdb (RD0B):
FS: xfs
Executing file system check: /sbin/xfs_repair -n '/dev/sdb1' 2>&1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 4
- agno = 5
- agno = 9
- agno = 11
- agno = 12
- agno = 7
- agno = 8
- agno = 3
- agno = 6
- agno = 2
- agno = 10
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
No file system corruption detected!
-
3 hours ago, JonathanM said:
Pretty sure you can, I can't think of any changes that would affect your ability to recover.
Well, I'll give it a shot then.
-
So I'm currently running 6.9.2 and tried to install your latest version, but it requires 6.11. Is there any chance you have an old version I can use to get my array back up and running? I'm not seeing it in Community Apps or your GitHub.
-
26 minutes ago, JonathanM said:
Did you install the Unassigned Devices plugin?
No, I did not.
I'm currently running version 6.9.2 of Unraid, and the one I just downloaded is for 6.11.0 and up.
I'm going to see if I can find a version compatible with 6.9.2.
I highly doubt I can upgrade Unraid in this state.
-
1 hour ago, trurl said:
Check filesystem on each of those disks, using the webUI. Capture the output and post it.
I'm assuming this means it's not mounted, right?
-
1 hour ago, itimpi said:
The disabled drives may actually be fine - have you tried unassigning them and then seeing if it mounts in Unassigned Devices?
I haven’t tried anything yet. I’ve been busy the last few days and haven’t really had time to address it. I'm hoping to read up when I can and then have a plan of attack, so all of this is helpful.
-
1 minute ago, itimpi said:
NO.
The drives do not 'need' a format. They need their corrupt file system to be repaired. If you attempt to format the drives while they are disabled, it will simply format the 'emulated' drives to contain an empty file system and update parity to reflect that, so in effect you wipe all your data. A format operation is NEVER part of a data recovery action unless you WANT to remove the data on the drives you format.
Understood. So if I replaced those drives with NEW drives, it would repopulate the data, correct?
-
8 hours ago, itimpi said:
Since the drives are currently marked as 'disabled' then Unraid has stopped using them and should be emulating them. You can see if the process for handling unmountable drives in the online documentation accessible via the Manual link at the bottom of the Unraid GUI works for the emulated drive.
If not there is a good chance that all (or at least most) of the contents can be recovered from the physical drives.
So, out of curiosity: since there are two drives that are needing a reformat, if I ended up just reformatting those, wouldn't the parity restore the data to those drives?
-
19 minutes ago, trurl said:
Awesome. I’ll take a look at that asap.
About the two drives: is there any hope of restoring them into the array? Or do I just have to take the loss and reformat them?
-
7 minutes ago, JorgeB said:
Yes, it has a SATA port multiplier.
So what would you recommend to add additional SATA ports to my system? The card used to be a recommended one for Unraid.
-
3 hours ago, JorgeB said:
I would really recommend avoiding SATA port multipliers:
Mar 13 04:02:57 LLNNAS1337 kernel: ata4.15: Port Multiplier detaching
Mar 13 04:02:57 LLNNAS1337 kernel: ahci 0000:03:00.0: FBS is disabled
Mar 13 04:02:57 LLNNAS1337 kernel: ata4.01: disabled
Mar 13 04:02:57 LLNNAS1337 kernel: ata4.02: disabled
Mar 13 04:02:57 LLNNAS1337 kernel: ata4.03: disabled
Mar 13 04:02:57 LLNNAS1337 kernel: ata4.04: disabled
Mar 13 04:02:57 LLNNAS1337 kernel: ata4.00: disabled
It detached and dropped all connected disks; reboot and post new diags after array start.
I assume that's talking about my pcie sata extension card?
-
Hello,
I woke up this morning and found some of my Docker containers weren't working. I was busy with work, so I couldn't get to it until the evening; when I looked at my Unraid server, I saw that my three most recent drives were listed as missing from the array (the ones from the first screenshot).
I have a 12-drive array with 2 parity drives. I have a Silverstone SST-CS308B case with a StarTech 3-drive hot-swap bay on a PCIe SATA expansion card (not sure which one, but I know I bought it from the known-working list over 5 years ago).
The 3 drives in question are in that hot-swap bay. I stopped the array (and downloaded the diagnostics) and tried to get the drives detected again, but to no avail. I rebooted the server, and now 1 of the drives is usable, but 2 are listed as detected but unmountable.
They've been in the system since December or so. They don't have too much data, but I want to see if there is a way to make them mountable at this point.
Nextcloud 404 error after upgrade from 6.9.2 to 6.12.8
in General Support
Posted
I recently upgraded my Unraid server from 6.9.2 to 6.12.8 due to other issues.
After the upgrade, it seemed the Docker settings changed slightly (I also accidentally upgraded Nextcloud to the latest version when I was on 22.2.2, but I updated the tag back to the version I'm currently running). The port switched back to 443 (the default port); I changed it back to the previous setting, port 4445 (noted in the config file below).
I did run New Permissions, as I found an issue with my Plex server when upgrading past 6.10.
Since the upgrade, my Nextcloud is returning a 404 error on any page I visit.
I'm struggling to find any additional clues on where to look for the problem.
The docker logs show:
But then later I went to look at it and saw this repeatedly:
The port on the Docker container matches the config file.
The access logs (nextcloud/log/ngix/access.log) show the same thing regardless of where I go.
Any tips on where I can find the source of the problem here?
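One way to narrow down a port mismatch like the one described above is to compare the port nginx actually listens on inside the container against the Docker port mapping. A minimal sketch; the config layout below is a made-up sample, not the exact linuxserver.io Nextcloud file:

```shell
# Minimal sketch: pull the first listen port out of an nginx server block so
# it can be compared with the container's port mapping (e.g. `docker port`).
get_listen_port() {
  grep -m1 -E '^[[:space:]]*listen' "$1" | grep -oE '[0-9]+' | head -1
}

# Hypothetical sample config standing in for the real nginx site config:
conf=$(mktemp)
cat > "$conf" <<'EOF'
server {
    listen 4445 ssl http2;
    server_name _;
}
EOF

get_listen_port "$conf"   # prints 4445
```

If the port printed here differs from what the container maps (or from the `overwrite.cli.url`-style settings in Nextcloud's own config), that mismatch is a common cause of blanket 404s after an upgrade.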