Soulflyzz

Members
  • Posts: 63

About Soulflyzz

  • Birthday: 04/09/1985

  • Gender: Male
  • Location: Alberta, Canada

Recent Profile Visitors

1126 profile views

Soulflyzz's Achievements

Rookie (2/14)

Reputation: 9

Community Answers

  1. I fired up the server after running the check, and now everything is running and all there. Thank you again, Unraid community, for solving my issues.
  2. Hello, I ran the check in maintenance mode without -n. This is what I got:

     Phase 1 - find and verify superblock...
     bad primary superblock - bad CRC in superblock !!!
     attempting to find secondary superblock...
     .found candidate secondary superblock...
     verified secondary superblock...
     writing modified primary superblock
     Phase 2 - using internal log
             - zero log...
     ERROR: The filesystem has valuable metadata changes in a log which needs to
     be replayed. Mount the filesystem to replay the log, and unmount it before
     re-running xfs_repair. If you are unable to mount the filesystem, then use
     the -L option to destroy the log and attempt a repair. Note that destroying
     the log may cause corruption -- please attempt a mount of the filesystem
     before doing this.
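     For anyone following along, a minimal sketch of the sequence that message asks for; /dev/md1 and /mnt/test are placeholder device and mountpoint names, substitute your own:

         # Mount the filesystem so the XFS log gets replayed, then unmount
         mount /dev/md1 /mnt/test
         umount /mnt/test

         # Re-run the repair
         xfs_repair -v /dev/md1

         # Only if the mount fails: destroy the log and attempt a repair
         # (the output above warns this may cause corruption)
         xfs_repair -L /dev/md1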
  3. The cause of the issue was a failing power supply on those six drives. I powered down the server, replaced the power supply, and powered the server back up, and could see all six drives; 2 were still unmountable after starting the server. I stopped the server and removed the drives from it hoping to emulate them, but they still said unmountable after starting it back up. I zeroed the 2 drives, stopped the server, and added them back as new drives. They are currently rebuilding but still say unmountable, so I will try your recommendations in 24-48 hours when those drives are done rebuilding.
  4. No, I don't; this is all I have. After the rebuild they are still saying unmountable. I am guessing I have lost all 12 TB.
  5. Diagnostics included. Hello, in the last 2 weeks I have had unmountable disks present:

     Disk 23 • ST6000VN001-2BB186_ZR14744R (sdad)
     Disk 24 • ST6000VN001-2BB186_ZR144E4P (sdah)

     They crash; I was able to rebuild them, and it happened again. This also happened to me about 4-6 months ago, and I was unable to correct the issue then, so I purchased 4 brand new 6TB disks and started a bunch of my collection over. Is there any way someone can look at this and maybe give me an idea of what is going on? towerofterror-diagnostics-20231023-2122.zip
  6. That did not work for me. The only thing that did work was the step I provided earlier.
  7. I was able to correct this issue myself.

     1 - I created a new Plex Docker called "Plex2" (which also created a new "Plex2" appdata location), then stopped that Docker right after creation.
     2 - Aimed the original "Plex" Docker at that new "Plex2" appdata folder.
     3 - Fired up the "Plex" Docker and it ran correctly, as if it were a brand new install.
     4 - Closed the "Plex" Docker, aimed it back at the original "Plex" appdata folder, and it fired up and started working.
     5 - Deleted the "Plex2" Docker.

     This also happened with Readarr and I was able to get it working the same way. Hope this can help someone in the future. *edits for spelling errors
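     If you prefer the command line, a rough sketch of the same idea with plain docker commands; the container names, image, and appdata paths here are examples, not the exact Unraid template values:

         # Create a throwaway container with a fresh config dir, then stop it
         docker run -d --name plex2 \
           -v /mnt/user/appdata/plex2:/config \
           lscr.io/linuxserver/plex
         docker stop plex2

         # Recreate the real container against the fresh config to test it
         docker rm -f plex
         docker run -d --name plex \
           -v /mnt/user/appdata/plex2:/config \
           lscr.io/linuxserver/plex

         # Once it runs cleanly, point it back at the original appdata
         docker rm -f plex plex2
         docker run -d --name plex \
           -v /mnt/user/appdata/plex:/config \
           lscr.io/linuxserver/plex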
  8. Hello, running 6.12.2 with a few Dockers, clean install from about 2 weeks ago. I was running Plex and went to add some Extra Parameters for Nvidia GPU passthrough, and the Docker failed the update. Now when I try to run Plex I get this error:

     Execution error: Image can not be deleted, in use by other container(s)

     If I try to delete it I get this error:

     Execution error: No such container

     I also tried to reboot the server; I got an unclean shutdown, and the reboot took forever and did not fix the issue. I have attached a log in hope that it will help. towerofterror-diagnostics-20230707-1659.zip
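     For anyone hitting the same two errors, a hedged sketch of the usual docker-side cleanup from the console; "plex" and the image ID are example names, not values from my setup:

         # List every container, including stopped/orphaned ones
         docker ps -a

         # Force-remove the stale container, then the image it was pinning
         docker rm -f plex
         docker images
         docker rmi <image-id>

         # Optionally sweep all stopped containers and dangling images
         docker container prune
         docker image prune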
  9. How would I go about giving it more RAM? I have an abundance, and I would love to mitigate any bottlenecks whenever possible.
  10. I have another question since the restart; I can open a new post if wanted. I see on the main dashboard that I have a ZFS bar, which I assume is ZFS memory; it is pegged at 90-100% all the time. Is there a way to give it more memory, or is this going to be a bottleneck I will continue to struggle with?
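     For reference, a sketch of checking and raising the ZFS ARC memory limit from the console; the 16 GiB figure is only an example, and newer Unraid releases may expose this setting in the GUI instead:

         # Current ARC size and ceiling, in bytes
         grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats

         # Module limit (0 means the built-in default)
         cat /sys/module/zfs/parameters/zfs_arc_max

         # Raise the ceiling to 16 GiB until the next reboot
         echo $((16 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max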
  11. I was able to use trurl's recommendations and got it running, but that tool would have been helpful.
  12. That's no good. Anything I can do to restore Unraid but still keep the data on my hard drives?
  13. Thank you for your patience. tower-diagnostics-20230621-0843.zip
  14. Hello, update: I followed through with the recommendation I read in the post I linked in the first request. The server boots up and I can get into the Unraid GUI with the second option, but the server will not load on localhost or the local IP. I cannot pull a diagnostic since I don't have access to the Unraid menu system, unless there is a way for me to do it in the CLI. Any direction on how I can get a fully restored Unraid without losing the data on my array drives would still be helpful. And if I'm past the point of no return, how to start again from square one would also be helpful.
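     A sketch of what I understand the CLI route for the diagnostics part to be, assuming the flash drive is mounted at /boot as usual:

         # From the local console or SSH, build the same zip the GUI produces
         diagnostics

         # The archive is written to the flash drive
         ls /boot/logs/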