prettyhatem
Members
Posts: 19
Joined: -
Last visited: -
Gender: Undisclosed
Unmountable: Unsupported or no file system
prettyhatem replied to prettyhatem's topic in General Support
Okay, started the array back up and it looks like it is seeing the disk just fine now. Here is the new set of diags. fileserver-diagnostics-20240305-1327.zip
-
Unmountable: Unsupported or no file system
prettyhatem replied to prettyhatem's topic in General Support
Okay, ran it with -L. Output:

```
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is
being destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
clearing needsrepair flag and regenerating metadata
sb_fdblocks 297972367, counted 300647576
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
inode 15764878235 - bad extent starting block number 4503567551346641, offset 0
correcting nextents for inode 15764878235
bad data fork in inode 15764878235
cleared inode 15764878235
        - agno = 8
        - agno = 9
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 3
        - agno = 0
        - agno = 5
        - agno = 2
        - agno = 9
        - agno = 4
        - agno = 6
        - agno = 8
        - agno = 7
entry "s_icejumper_attack_spike_02.uasset" at block 0 offset 3624 in directory inode 15764878092 references free inode 15764878235
	clearing inode number in entry at offset 3624...
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
bad hash table for directory inode 15764878092 (no data entry): rebuilding
rebuilding directory inode 15764878092
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (30:1702450) is ahead of log (1:2).
Format log to cycle 33.
done
```
-
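Since the repair cleared an inode and moved disconnected inodes to lost+found, it's worth checking what ended up there once the disk mounts again. A minimal sketch, assuming the disk mounts at /mnt/disk5 (a hypothetical path; substitute the actual mount point):

```shell
# List anything xfs_repair orphaned; if the directory is absent,
# nothing was left disconnected by the repair
ls -la /mnt/disk5/lost+found 2>/dev/null || echo "no lost+found - nothing was disconnected"
```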
Unmountable: Unsupported or no file system
prettyhatem replied to prettyhatem's topic in General Support
I let the parity check finish, then unmounted and remounted in maintenance mode. I ran xfs_repair, which logged:

```
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
```

I am assuming I should follow the instructions? Start the array out of maintenance mode, stop it, and re-run the repair?
-
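For anyone following along, the decision tree that error describes can be sketched as shell steps. This is a sketch only, assuming the disk shows up as /dev/md5 while the array is in maintenance mode; md5 and /mnt/tmpfix are placeholders, substitute your actual device and a scratch mount point:

```shell
# 1. Dry run first: -n reports problems without writing to the disk
xfs_repair -n /dev/md5

# 2. Preferred path: mounting replays the journal; a clean unmount
#    then lets xfs_repair run without the "log needs to be replayed" error
mount /dev/md5 /mnt/tmpfix
umount /mnt/tmpfix
xfs_repair /dev/md5

# 3. Last resort, only if the mount itself fails: -L zeroes the log,
#    discarding any metadata changes it still held
xfs_repair -L /dev/md5
```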
I had some odd things happening on my Unraid server. I had physically installed a new disk and was doing a Preclear on it. At about 90% it paused and could not progress. When looking at the UI, I would often get timeouts and it wouldn't fully populate the Docker list. I attempted to stop the array, but it looked like it stalled on stopping Docker. I attempted to kill the Docker containers manually, but that didn't work. At some point I decided I should just force-restart the server. It came back up and I started the array; a parity check started, as it was an unclean shutdown. Now I am noticing disk 5 is showing "Unmountable: Unsupported or no file system". I have yet to add the new disk to the array, but now I am unsure how to proceed. Do I need to stop the parity check, unmount the drive, and do a filesystem check of some sort? EDIT: I am just now noticing that all of my Docker containers have "not available" under their Versions. Appreciate any advice! fileserver-diagnostics-20240304-1637.zip
-
[Support] Eurotimmy - RomM (ROM Manager) by zurdi15
prettyhatem replied to Eurotimmy's topic in Docker Containers
I am stuck on the exact same step. -
Updated to 6.12.4 UI Stops working after starting array
prettyhatem replied to prettyhatem's topic in General Support
I think this is due to my cache; doing a `zpool status -v` I see corruption. These SSDs might be dying.

```
  pool: cache
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
config:

        NAME        STATE     READ WRITE CKSUM
        cache       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdb1    ONLINE       0     0    31.1K
            sde1    ONLINE       0     0    31.1K

errors: Permanent errors have been detected in the following files:

        /mnt/cache/docker-xfs.img
```
-
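If it helps anyone else hitting ZFS-8000-8A: once the damaged file is restored or deleted, the usual follow-up (a general ZFS sketch, not Unraid-specific advice) is to clear the error counters and run a scrub so the pool re-reads and re-checksums every block:

```shell
# Reset the READ/WRITE/CKSUM counters on the pool
zpool clear cache

# Re-read all data and verify checksums; errors that reappear after a
# clean scrub point at failing hardware rather than a one-off event
zpool scrub cache
zpool status -v cache
```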
Updated to 6.12.4 UI Stops working after starting array
prettyhatem replied to prettyhatem's topic in General Support
Okay, so I fixed the shfs issue. I had to recreate my cache because it seemed like a corrupt filesystem. I have restarted in Safe Mode and now nginx is not crashing when starting the array! But I have had two hard locks on the machine since, so I have included my diags. fileserver-diagnostics-20231026-1718.zip
-
I just updated from 6.11 to 6.12.4, and after the initial reboot I can access the UI. After starting the array, the UI stops working. I can SSH in and see the array is up, shares are working, and Docker is working. To test, I restarted the server and this time started the array with Docker disabled. Same results. I have included the diagnostics. fileserver-diagnostics-20231005-1039.zip
-
Dual Parity questions (potential failed drive)
prettyhatem replied to prettyhatem's topic in General Support
Gotcha, here are my diagnostics! Thanks! fileserver-diagnostics-20170608-1701.zip
-
Dual Parity questions (potential failed drive)
prettyhatem replied to prettyhatem's topic in General Support
Yeah, I am about 90% sure, as this is a new drive and I just added it to the array to replace a smaller parity drive. As soon as I added it I could hear the drive from another room... Alright, thanks for the info on the parity.
-
I have a dual-parity Unraid setup with 5 data drives. I think my second parity drive is bad, as I hear loud drive noises from it AND it is reporting hot at 46 degrees. I was wondering if it is possible to stop the array, remove the bad drive, and turn it back on with a single parity drive running? I understand it will be in a more vulnerable state with single parity, but I want to remove this one for the next couple of days until I get the replacement drive...
-
PCIe Passthru: Reporting incorrect video card model!
prettyhatem replied to prettyhatem's topic in VM Engine (KVM)
Which version are you running, 6.1.9 or the 6.2 beta? If you are running 6.2 there might be differences in the XML. I guess the configuration file you mention is the XML file for the VM? The 6.2 beta, and correct, the XML is what I am referring to.
-
PCIe Passthru: Reporting incorrect video card model!
prettyhatem replied to prettyhatem's topic in VM Engine (KVM)
Well, I did as you mentioned and got the same results. I found a couple of ROMs that fit the description, but still the same results. Also, it seems like that documentation might be out of date? I was using SeaBIOS and the configuration file didn't appear like that; it looks like the OVMF-based one. So I created a new VM with each to see if that would help, and still no go.
-
PCIe Passthru: Reporting incorrect video card model!
prettyhatem replied to prettyhatem's topic in VM Engine (KVM)
ROM file? I am not entirely sure what that is. Yeah, passing this to Windows 10 doesn't seem to correct the reporting.
-
PCIe Passthru: Reporting incorrect video card model!
prettyhatem replied to prettyhatem's topic in VM Engine (KVM)
I am still trying to troubleshoot this issue and I am out of luck. I was thinking of booting a WinPE thumbdrive on the system to see if Windows reports the correct model. My worry is whether that is a good idea: will booting off a WinPE thumbdrive somehow mess with my array disks? Could Windows somehow try to use them and mess something up? Or am I being stupid?