Soulflyzz

Members
  • Posts: 63
Everything posted by Soulflyzz

  1. I fired up the server after running the check and now everything is running and there. Thank you again, Unraid community, for solving my issues.
  2. Hello, I ran the check in maintenance mode without -n and this is what I got:
     Phase 1 - find and verify superblock...
     bad primary superblock - bad CRC in superblock !!!
     attempting to find secondary superblock...
     .found candidate secondary superblock...
     verified secondary superblock...
     writing modified primary superblock
     Phase 2 - using internal log
     - zero log...
     ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
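     For reference, the sequence that xfs_repair message is asking for looks roughly like this. This is a hedged sketch only: the device path and mount point are placeholders (on Unraid the array device for disk N is typically /dev/mdN), and these commands modify the filesystem, so adapt them to your own array before running anything.

     ```shell
     # Placeholders -- substitute your actual device and a scratch mount point.
     DEV=/dev/mdX
     MNT=/mnt/repairtest

     # 1) Mount the filesystem so XFS can replay its log, then unmount cleanly.
     mkdir -p "$MNT"
     mount "$DEV" "$MNT" && umount "$MNT"

     # 2) Re-run the repair (no -n, so fixes are actually written).
     xfs_repair "$DEV"

     # 3) Only if the mount in step 1 fails should you zero the log; this
     #    discards the pending metadata changes and can cause corruption:
     # xfs_repair -L "$DEV"
     ```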
  3. The cause of the issue was a failing power supply feeding those six drives. I powered down the server, replaced the power supply, and powered it back up; I could then see all six drives, but two were still unmountable after starting the array. I stopped the array and removed the two drives hoping to emulate them, but they still showed unmountable after starting back up. I zeroed the two drives, stopped the array, and added them back as new drives. They are currently rebuilding but still say unmountable, so I will try your recommendations in 24-48 hours when the rebuild is done.
  4. No, I don't; this is all I have. After the rebuild they are still saying unmountable. I am guessing I have lost all 12 TB.
  5. Diagnostics included. Hello, in the last 2 weeks I have had a crash with unmountable disks present: Disk 23 • ST6000VN001-2BB186_ZR14744R (sdad) and Disk 24 • ST6000VN001-2BB186_ZR144E4P (sdah). I was able to rebuild them, and then it happened again. This also happened to me about 4-6 months ago; I was unable to correct the issue then, so I purchased 4 brand-new 6 TB disks and restarted a big chunk of my collection from scratch. Is there any way someone can look at this and give me an idea of what is going on? towerofterror-diagnostics-20231023-2122.zip
  6. That did not work for me. The only thing that did work was the step I provided earlier.
  7. I was able to correct this issue myself. 1 - I created a new Plex Docker called "Plex2", which created a new "Plex2" appdata location, then stopped that docker right after creation. 2 - I pointed the original "Plex" docker at that new "Plex2" appdata folder. 3 - I fired up the "Plex" docker and it ran correctly, as if it were a brand-new install. 4 - I stopped the "Plex" docker, pointed it back at the original "Plex" appdata folder, and it fired up and started working. 5 - I deleted the Plex2 docker. This also happened with Readarr, and I was able to get it working the same way. Hope this can help someone in the future. *edits for spelling errors
  8. Hello, running 6.12.2 with a few dockers, from a clean install about 2 weeks ago. I went to add some Extra Parameters to Plex for NVIDIA GPU passthrough and the docker failed the update. Now when I try to run Plex I get this error: "Execution error: Image can not be deleted, in use by other container(s)". If I try to delete it I get: "Execution error: No such container". I also tried to reboot the server; I got an unclean shutdown, the reboot took forever, and it did not fix the issue. I have attached a log in the hope that it will help. towerofterror-diagnostics-20230707-1659.zip
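     That "image in use by other container(s)" / "no such container" pair usually means a stale container record is still holding the image. A rough sketch of how to hunt it down from the console (the image name here is illustrative, not necessarily what the Plex template on this system uses):

     ```shell
     # List ALL containers, including stopped/dead ones, that reference the image.
     docker ps -a --filter ancestor=plexinc/pms-docker

     # Force-remove the stale container by the ID shown above,
     # then remove the orphaned image so the template can recreate it.
     docker rm -f <container_id>
     docker image rm plexinc/pms-docker
     ```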
  9. How would I go about giving it more RAM? I have an abundance, and I would love to mitigate any bottlenecks whenever possible.
  10. I have another question since the restart; I can open a new post if wanted. On the main dashboard I see a ZFS bar, which I assume is ZFS memory, and it is pegged at 90-100% all the time. Is there a way to give it more memory, or is this a bottleneck I will continue to struggle with?
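     For context: that bar tracks the ZFS ARC (read cache), and a near-full ARC is normal; ARC grows to its cap and gives memory back under pressure. The cap can be inspected and raised through the ZFS module parameter. A hedged sketch (the 32 GiB figure is purely illustrative; the change below lasts only until reboot):

     ```shell
     # Current ARC size cap in bytes (0 means "use the ZFS default").
     cat /sys/module/zfs/parameters/zfs_arc_max

     # Raise the cap to 32 GiB until the next reboot. Pick a size that
     # still leaves headroom for your dockers and VMs.
     echo $((32 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
     ```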
  11. I was able to use trurl's recommendations and got it running, but that tool would have been helpful.
  12. That's no good. Is there anything I can do to restore Unraid but still keep the data on my hard drives?
  13. Thank you for your patience. tower-diagnostics-20230621-0843.zip
  14. Hello, update: I followed through with the recommendation I read in the post I linked in the first request. The server boots up and I can get into the Unraid GUI with the second boot option, but the server will not load on localhost or the local IP. I cannot pull a diagnostic because I don't have access to the Unraid menu system, unless there is a way to do it in the CLI. Any direction on how I can get a fully restored Unraid without losing the data on my array drives would still be helpful. And if I'm past the point of no return, how to start again from square one would also be helpful.
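     There is a way to do this from the CLI: Unraid ships a console command that builds the same diagnostics archive the web GUI does, writing it to the flash drive so it survives even when the GUI is unreachable. A minimal sketch:

     ```shell
     # From the local console or an SSH session, as root:
     # writes a dated zip (e.g. tower-diagnostics-YYYYMMDD-HHMM.zip)
     # to /boot/logs on the USB flash device.
     diagnostics
     ```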
  15. Hello, I am looking to start fresh-ish with 6.12. I updated and converted my 4 pool/cache drives to ZFS, but when I restored the data nothing came back. I am not too worried, because my system was becoming a mess and I wanted a reason to start over. How do I go about doing that without losing the data on my 28 array drives with 80 TB of storage? I followed this guide and post 5, but I think I messed up.
  16. Hello, I am having this same issue and I have found a temporary solution. I went into the console and deleted the file that was crashing the system, then rebooted SWAG. Everything came back online and worked for about 24 hours; then that file came back and broke it again. Have you or anyone else found a cause or solution to this issue yet? ***Before anyone spams me: I have since found the solution by continued reading in this thread. Delete this if wanted, or keep it as a reference on "please read before you post".
  17. Hello, I would love to give my feedback. I am running a Dell R720 with 2x Xeon E5-2690 @ 2.9 GHz, 256 GB of RAM, and a 2060 Super. I have Steam Headless installed with appdata on a btrfs cache-only drive and the Steam game folder running off an NTFS unassigned device. Currently my testing is on Linux-supported games only. The Binding of Isaac worked everywhere, but Half-Life 2 will only work when installed on an NTFS drive (read this whole thread to figure that one out). I still have other "modern" games that start to load and then crash; two examples are Borderlands 2 and Halo Infinite. Can anyone supply any idea of what might cause them to load and then just crash?
  18. Hello, with the release of 6.11.0 I feel it's time to move up to 6.10.3; I am currently on 6.9.2 with no errors. I am running Docksocket, and the Update Assistant said something about it not being supported on 6.11.0, but that it should work on 6.10. Is this true, or should I just stay on 6.9.2, since everything is working perfectly and it's not worth messing anything up? Thank you,
  19. Hello, generic question: I am running blakeblackshear/frigate:stable-amd64nvidia and everything is working as I want it to with Frigate. But in homeassistantcore (Docker), when I try to set up a new integration and add the Frigate info (the IP address http://***.***.***.***:5000/ which I use to view Frigate), it comes up with "Failed to connect". I have read that the integration requires version 10.*.* and this docker image is version 9.*.*. Has anyone else run into this issue, or is it just me? Is there a known solution? Also, I will add that this integration did work at one time, before that big update that broke Frigate and forced me to update the .yaml. Sorry if this was long-winded, but I wanted to supply as much info as I could.
  20. I tried to get this app working as well but failed every time. I'm running a UniFi Pro-4 router with IDS enabled and found the cause of my issues in my router logs. It sees the traffic as a trojan; here is one of the hundreds of blocks my router made: "ET TROJAN Possible Compromised Host AnubisNetworks Sinkhole Cookie Value Snkz / Date: 07/15/2021 / Time: 08:56:30 AM / Severity: High / Type: A Network Trojan was Detected / Category: IPS_VALUES_CATEGORY_EMERGING-TROJAN". I am not accusing SpaceinvaderOne of anything! I ran this app 3 different times and my router log was full of these the whole time it was running. Hope this helps anyone else having issues; does anyone else see the same?
  21. Thank you for all the help. I pushed the server over the weekend and could not make it crash. Thank you for the help correcting this issue.
  22. Hello, update from last week: I have not had a single crash since uninstalling the "SAS sleep plugin". I am in the process of running another parity check, and will run the Mover after that is done as the final test.
  23. I have uninstalled the SAS spin-down plugin as of this morning, after a fresh reboot. I will run the server for 24 hours and let you know my results.
  24. *Update* I have disabled my 5 am Mover, did a maintenance-mode repair with -L on all 20 drives, removed the second e-SAS cord between the R720 and the MD1200, and started a parity check. *I forgot to disable spin-down on the drives.* I woke up this morning with the parity check at 60% complete, with only 2 errors on the first 10 drives. All my dockers look to be running, but all my shares are missing again this morning. Also, when I checked, all the drives were spun down. r720-diagnostics-20210630-0806.zip