
PrisonMike

Members
  • Posts: 37
  • Joined
  • Last visited


PrisonMike's Achievements

Noob (1/14)

Reputation: 3

  1. Hello, I have backed up what I can from the cache drive to my Windows PC. However, I don't understand why I can't just remove the bad drive and replace it with a new one. I have two SSDs in the cache pool that should have been mirroring each other. Are both drives corrupted? (A sketch of one way a mirrored btrfs member can be swapped follows this list.) tower-diagnostics-20240803-1604.zip
  2. OK, I will try to copy each container by itself. Thanks for the heads up!
  3. Hmm, I see. Maybe I am going about backing up the pool the wrong way. I also have the Dynamix File Manager plugin, but it hangs on binhex-krusader when I try to copy appdata from "/appdata" to "/mainshare".
  4. Well, I'm about ready to pull my hair out. I can't get the Docker service to start so I can run Krusader, and I can't get Windows to play nicely so I can copy the appdata folder to my desktop PC. Basically my appdata folder is trapped and can't be moved anywhere. Maybe there is some kind of permissions issue? I have never had this problem before with my Unraid server and it's very frustrating. Scrub and balance abort themselves when I try to start them. Nothing seems to be working. (An rsync copy sketch for appdata follows this list.)
  5. Hello, I found more errors when I checked using the monitoring guide you provided. I'm thinking that this drive is failing. It is still under warranty since I only bought it in Jan 2023. I attempted to convert the cache to single mode and was able to mount the "good" drive. I tried to grab the appdata folder and back it up to my personal PC, but I get a Windows error that the network drive is unavailable. Is it OK to run Docker containers as normal with only one cache disk, since the other one will be removed and returned under warranty? *EDIT* Fix Common Problems says the single drive is now in read-only mode. tower-diagnostics-20240730-0918.zip
  6. Thanks! I followed the guide and reset the errors. FYI, before I reset them the output was:

     root@Tower:~# btrfs dev stats /mnt/cache
     [/dev/sdg1].write_io_errs    14579960
     [/dev/sdg1].read_io_errs     10433665
     [/dev/sdg1].flush_io_errs    48340
     [/dev/sdg1].corruption_errs  1151794
     [/dev/sdg1].generation_errs  0
     [/dev/sdf1].write_io_errs    23172219
     [/dev/sdf1].read_io_errs     57409
     [/dev/sdf1].flush_io_errs    546115
     [/dev/sdf1].corruption_errs  3
     [/dev/sdf1].generation_errs  0

     Side note: should I be scrubbing or balancing on some kind of schedule? (A reset-and-scrub sketch follows this list.) This is the first time since I started using Unraid (4-5 years) that I have run into this problem. Thanks again!
  7. Hello, I was able to restart the array and both drives mounted. Scrub returned the following:

     UUID:             f77815f4-da02-4c6a-a168-4f1d83b1ce1c
     Scrub started:    Sun Jul 28 06:59:39 2024
     Status:           finished
     Duration:         0:07:51
     Total to scrub:   409.95GiB
     Rate:             891.28MiB/s
     Error summary:    no errors found
  8. My mistake, I couldn't remember whether to grab them before or after starting the array. I have attached them to this post. Thanks! tower-diagnostics-20240727-1628.zip
  9. *Update*: After posting the diagnostics I started the array, and now the cache disks say "Unmountable: Unsupported or no file system".
  10. Hello, I am looking for some help with my Docker containers. I recently had some issues with them and decided to delete the image. After I deleted the image, some of the containers start and some don't. If I manually start one, I get an error 403. I did some searching and I think it may have something to do with the logs, but I'm not too sure (a log-usage check sketch follows this list). I have attached my diagnostics. Thanks in advance. tower-diagnostics-20240727-1026.zip
  11. Thanks so much for your assistance! This worked after I restarted; everything appears to be working normally now!
  12. Hello, here is the result:

     root@Tower:~# gdisk /dev/sdb
     GPT fdisk (gdisk) version 1.0.9.1

     Caution: invalid main GPT header, but valid backup; regenerating main header from backup!

     Warning: Invalid CRC on main header data; loaded backup partition table.
     Warning! One or more CRCs don't match. You should repair the disk!
     Main header: ERROR
     Backup header: OK
     Main partition table: OK
     Backup partition table: OK

     Partition table scan:
       MBR: protective
       BSD: not present
       APM: not present
       GPT: damaged

     ****************************************************************************
     Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
     verification and recovery are STRONGLY recommended.
     ****************************************************************************

     Command (? for help):

     Should I repair the GPT table? How is this done? (A sketch of the interactive repair steps follows this list.) Thanks!
  13. Hello, it is still sdb. Please see the outputs below.

     /dev/sdb: PTUUID="1616b127-e5f5-4f58-a5ac-0ee54460088b" PTTYPE="gpt"

     and

     The primary GPT table is corrupt, but the backup appears OK, so that will be used.
     Disk /dev/sdb: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
     Disk model: WDC WD40EFRX-68N
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: 1616B127-E5F5-4F58-A5AC-0EE54460088B

     Device     Start        End     Sectors  Size Type
     /dev/sdb1     64 7814037134  7814037071  3.6T Linux filesystem

     Thanks again
  14. My apologies, I didn't realize that I had to do them after the array started. I have attached the new diagnostics for your review. Thanks again in advance. tower-diagnostics-20240711-0757.zip
  15. JorgeB, hello and thanks for your assistance. I have attached my diagnostic zip file to this post. tower-diagnostics-20240619-1116.zip
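Replacing a failing member of a btrfs pool (item 1): a minimal sketch of how a mirrored btrfs device can be swapped from the console, assuming the pool is mounted at /mnt/cache; /dev/sdg1 (failing SSD) and /dev/sdX1 (new SSD) are placeholder names, not values taken from the diagnostics. On Unraid the usual route is to stop the array and reassign the cache slot in the GUI, so treat this only as an illustration of the underlying mechanism.

    # list the pool members and their device IDs
    btrfs filesystem show /mnt/cache

    # copy data from the failing member onto the new device and retire the old one
    btrfs replace start /dev/sdg1 /dev/sdX1 /mnt/cache

    # poll until the status reports "finished"
    btrfs replace status /mnt/cache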
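Copying appdata off the pool (items 3-5): a minimal rsync sketch, assuming the pool is still readable at /mnt/cache and that "mainshare" is an array share; the appdata-backup folder name is made up for the example. Stopping the Docker service first (Settings > Docker) avoids copying files that are in use.

    # copy appdata to the array, preserving attributes and showing progress
    rsync -avh --progress /mnt/cache/appdata/ /mnt/user/mainshare/appdata-backup/

    # run the same command a second time as a quick check;
    # a clean repeat pass should transfer little or nothing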
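Resetting the error counters and scheduling scrubs (item 6): a minimal sketch of the relevant commands; the monthly cron line is illustrative only, and on Unraid a scheduled scrub is more commonly set up through the GUI or the User Scripts plugin.

    # print the current counters and zero them (only after the underlying hardware issue is dealt with)
    btrfs device stats -z /mnt/cache

    # run a scrub in the foreground and print the summary when it finishes
    btrfs scrub start -B /mnt/cache

    # illustrative cron entry: scrub at 03:00 on the 1st of every month
    # 0 3 1 * * btrfs scrub start /mnt/cache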
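Checking the "maybe it's the logs" theory (item 10): a minimal sketch for seeing whether container logs are filling the Docker image, assuming the default json-file log driver and the usual /var/lib/docker mount point; this only checks usage, it does not by itself explain the 403 error.

    # how full is the Docker image/filesystem?
    df -h /var/lib/docker

    # list the largest per-container log files
    du -sh /var/lib/docker/containers/*/*-json.log 2>/dev/null | sort -h | tail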
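Repairing the GPT from the backup table (item 12): a minimal sketch of the rest of the interactive gdisk session. gdisk has already rebuilt the main header from the good backup in memory, so the remaining steps are to inspect, verify, and write; nothing is changed on disk until the write in the last step is confirmed.

    gdisk /dev/sdb
      p    # print the in-memory table; it should show the single 3.6T Linux filesystem partition
      v    # verify the disk and report any remaining problems
      w    # write the repaired GPT (main and backup) back to disk, then confirm with Y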