Everything posted by zerosk

  1. I swear I had tried that at some point already, but it worked either way. I copied my files back over and my containers and VMs booted. Thanks!
  2. root@HP-Gaming:~# blkid
     /dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="6be85193-01"
     /dev/loop1: TYPE="squashfs"
     /dev/sdf1: UUID="b04a38f7-63f4-486c-a90a-08cd9d4119c6" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="dcf35c91-a2e8-4d9a-a8bc-1b819d27eddf"
     /dev/nvme0n1p1: UUID="1b8e3249-041f-466c-9e04-c95c6c17b5b4" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="fce6d4e1-8e1c-44d9-a04d-86f069e5e0cb"
     /dev/md2p1: UUID="b04a38f7-63f4-486c-a90a-08cd9d4119c6" BLOCK_SIZE="512" TYPE="xfs"
     /dev/sdb1: UUID="04835ac0-2973-455d-a406-d4ed8fb99400" UUID_SUB="e76222c1-f23c-4fc9-b422-95c095992f02" BLOCK_SIZE="4096" TYPE="btrfs"
     /dev/md5p1: UUID="55230ea4-74dc-4740-83a2-3d9171da7df7" BLOCK_SIZE="512" TYPE="xfs"
     /dev/sdk1: UUID="ea509c56-6f25-47e5-b35e-9f4f98548024" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="5540aa67-bc96-46b1-8407-e36a1dfe8e3b"
     /dev/sdi1: UUID="ca9e0c5d-302e-4597-a02b-42ae8d8e9154" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="b6fece0f-3e5f-4f29-8fab-c74151e259e6"
     /dev/md1p1: UUID="ea509c56-6f25-47e5-b35e-9f4f98548024" BLOCK_SIZE="512" TYPE="xfs"
     /dev/sdg1: UUID="7dea7d68-2878-484f-b3f4-3cb398dcc0a4" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="0cbd7835-c6a0-40ef-ab26-aa8896a94239"
     /dev/md4p1: UUID="5bd1dcbf-91cd-4864-bc43-d4ddf5784ee0" BLOCK_SIZE="512" TYPE="xfs"
     /dev/loop0: TYPE="squashfs"
     /dev/sde1: UUID="5bd1dcbf-91cd-4864-bc43-d4ddf5784ee0" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="88b2238f-773a-4b7e-854a-a57a7d066d38"
     /dev/md7p1: UUID="f4677b64-d1f3-48f1-afb2-ce5c22dfa7c7" BLOCK_SIZE="512" TYPE="xfs"
     /dev/sdj1: UUID="17fb7ceb-2065-4584-99d8-ce8fb63a5cc2" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="8aba17f2-ed9f-40ac-b902-885c03a868e8"
     /dev/md3p1: UUID="ca9e0c5d-302e-4597-a02b-42ae8d8e9154" BLOCK_SIZE="512" TYPE="xfs"
     /dev/sdh1: UUID="f4677b64-d1f3-48f1-afb2-ce5c22dfa7c7" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="506489a0-25cb-4164-809c-f1a8c385b3b1"
     /dev/md6p1: UUID="7dea7d68-2878-484f-b3f4-3cb398dcc0a4" BLOCK_SIZE="512" TYPE="xfs"
     /dev/loop2: UUID="c3ae25a2-ce33-4b32-ae52-b43b9a566c6a" UUID_SUB="6a796dee-93c5-4391-b74f-f5ba400eaf1e" BLOCK_SIZE="4096" TYPE="btrfs"
     /dev/loop3: UUID="f945752b-d159-4ae0-9c1e-29966aa70efc" UUID_SUB="6c835a9b-108c-4888-8d32-ed486fba0452" BLOCK_SIZE="4096" TYPE="btrfs"
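     For anyone skimming that wall of output: the only physical btrfs device is /dev/sdb1 (loop2 and loop3 are Unraid's docker and libvirt image files), so that should be the surviving cache-pool member. A quick sketch to filter for it:

     root@HP-Gaming:~# blkid | grep 'TYPE="btrfs"'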
  3. Running Unraid 6.12.11.
     - Main array is 8 drives (7 data, 1 parity)
     - Cache pool is 2x 1TB btrfs RAID1
     - Parity drive is 16TB; all data drives are 8-14TB

     Drive 5 is 14TB and is failing. It eventually got disabled by Unraid and its contents are being emulated. I have a new 16TB installed as a replacement, using the same SATA and power cable (I want to see if the 14TB is actually bad or if it's a cable/port issue).

     I am also having issues with one of my cache drives at the same time: lots of CRC errors (>20000), and I believe it's causing stability issues with the system on top of the fragile emulated state. The docker service keeps tipping over, and I get errors starting containers from a stopped state. There is also roughly a 50% chance that the array will fail to stop gracefully, forcing me to reboot the system. The system log is full of btrfs errors about the cache drive.

     I have tried to fix this myself by reading the Unraid docs, some forum posts, and the FAQ on dealing with cache disks, but I am not getting the desired result, and I am nervous that I will cause further damage or data loss if I am not careful.
     https://forums.unraid.net/topic/46802-faq-for-unraid-v6/page/2/#comment-700582
     https://forums.unraid.net/topic/89115-unmountable-too-many-missingmisplaced-devices-every-time/
     https://forums.unraid.net/topic/141403-cannot-start-array-wrong-pool-state-cache-too-many-missingwrong-devices/

     Desired solution:
     - Run my cache pool as a single drive, on the remaining good 1TB SSD.
     - Run the array in an emulated state while I wait for a preclear to complete on the new 16TB replacement for disk 5.
     - Maintain system stability (no docker crashes) until disk 5 can be properly integrated into the array.
     - Once the replacement 1TB SSD arrives in a few days, add it back in as a mirror.

     I have already mounted the good 1TB disk to /temp and then used mc to copy all its files into a designated folder on Disk 7 (roughly the commands I used are sketched below). At a glance, I believe all my data is intact. I have backups of the important stuff as well if needed.

     I then attempted to start the array with the cache pool set to 1 slot. The array started but told me the cache needed to be formatted. I checked Yes and tried to format, but it gives the error "Unmountable: Unsupported or no file system". I then stopped the array, erased and precleared the good 1TB SSD, and rejoined it to the cache pool. On startup I still get "Unmountable: Unsupported or no file system". I also tried removing all drives, setting the pool slots to 0, and creating a new pool named "cache" instead of "Cache", and I still get "Unmountable: Unsupported or no file system".

     Could someone please provide guidance on what the best course of action is?

     diagnostics-20240816-2140.zip
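     For reference, the recovery copy mentioned above looked roughly like this from the console. This is only a sketch of my own steps; the device name is an assumption (check blkid for yours):

     mkdir -p /temp
     mount -o ro /dev/sdX1 /temp   # read-only mount of the good cache member; add ",degraded" if the other member is missing
     mc                            # copy everything from /temp into the backup folder on /mnt/disk7
     umount /temp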
  4. I resolved this by removing the Home Assistant VM, but NOT the disk (screenshot the settings or copy your XML config beforehand). Then create a new VM, but point its primary vdisk to the existing qcow2 image. The VM should boot properly upon being created.
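     If you would rather do it from the console than the VMs tab, the same idea can be sketched with virsh (the VM name here is an assumption; check "virsh list --all" for yours):

     virsh dumpxml "HomeAssistant" > /boot/ha-vm-backup.xml   # save the XML config first
     virsh undefine "HomeAssistant" --nvram                   # removes only the VM definition; the qcow2 vdisk stays on disk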
  5. I updated to version 2.9.14 today and now I can't log in to my container. To be fair, I haven't tried logging in for a while (about a month), so it's possible it wasn't due to this update. When I click Log In after entering my credentials, I get a "Bad Gateway" error just below the password field. The logs are a constant stream of this, over and over:

     [1/2/2022] [1:03:59 PM] [Global ] › ℹ info Manual db configuration already exists, skipping config creation from environment variables
     [1/2/2022] [1:03:59 PM] [Migrate ] › ℹ info Current database version: none
     [1/2/2022] [1:03:59 PM] [Global ] › ✖ error certificate.meta.dns_provider_credentials.replaceAll is not a function
     [1/2/2022] [1:04:00 PM] [Global ] › ℹ info Manual db configuration already exists, skipping config creation from environment variables
     [1/2/2022] [1:04:00 PM] [Migrate ] › ℹ info Current database version: none

     The container still seems to be working because it's proxying my stuff correctly.
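     For what it's worth, String.prototype.replaceAll only exists in Node 15 and newer, so the "replaceAll is not a function" error suggests either an older Node inside the image or that dns_provider_credentials isn't a string. A quick sketch to check from the Unraid console (the container name is an assumption; use yours from the Docker tab):

     docker exec NginxProxyManager node --version   # replaceAll requires Node >= 15
     docker logs --tail 50 NginxProxyManager        # confirm the same error is still looping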
  6. Repoman, you goddamn legend. I've been trying to get this going for a few weeks now and I was so close. I was just missing the setting in qbittorrent; for some reason I kept overlooking that and was too focused on the Radarr4K container. This worked perfectly for me, thanks for taking the time to come post your solution in such detail.
  7. I'm having similar issues since updating the container last night. It just won't connect at all; nothing will upload or download. But when I run "curl ifconfig.io" in the console it shows me the correct VPN IP, and nothing stands out to me in the logs. Previously when this happened I could get around it by randomizing the port it uses in the settings, but that doesn't seem to work now either. I've tried restarting the container, as well as shutting it down for a while and then starting it back up. I've also tried force updating the container, with no change. The weird thing is that the DHT nodes counter at the bottom keeps climbing after a restart, so it seems to be communicating; it just won't run any torrents. I haven't made any other Unraid/network changes since updating the container.

     EDIT: I let the container sit there for 15 minutes and it seemed to start working, albeit quite slowly. Only a couple of my torrents have downloaded their metadata, and even those are running quite slowly. But when I turn on my PIA VPN from another computer on the same network and connect to the same node, I can run a speedtest and get 700 down/350 up. I don't know why it's behaving so strangely.

     EDIT 2: Things appear back to normal after about 30 minutes. Possibly just VPN problems, or issues with the specific torrents in question.
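     In case it helps anyone hitting the same thing, these are the checks I ran from the Unraid console, as a sketch (the container name is the binhex default and is an assumption):

     docker exec binhex-qbittorrentvpn curl -s ifconfig.io   # should print the VPN endpoint IP, not your WAN IP
     docker logs --tail 100 binhex-qbittorrentvpn            # watch for iptables/port-forwarding errors
     docker restart binhex-qbittorrentvpn                    # a full restart re-runs the VPN setup scripts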
  8. I'm struggling to get 2 copies of this container running at the same time through a VPN container. I want to have a Radarr and a Radarr 4K container so I can keep 2 copies of each movie, one for the LAN and one for when I'm watching externally from my network. I've posted more details here: https://www.reddit.com/r/unRAID/comments/ln40gk/help_with_2_radarr_containers_for_4k/ I don't know if I'm doing something wrong, or maybe it's the containers I'm using (Binhex-qbit VPN), but I can't seem to get both running through the VPN properly (the general pattern I'm attempting is sketched below). Can anyone help me, or at least tell me if I'm wasting my time?
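     For reference, here is the general pattern I'm attempting, sketched with assumed container names, paths, and ports:

     # Route a second Radarr instance through the VPN container's network stack
     docker run -d --name=radarr-4k \
       --net=container:binhex-qbittorrentvpn \
       -v /mnt/user/appdata/radarr-4k:/config \
       -v /mnt/user/media:/media \
       linuxserver/radarr
     # Note: with --net=container the WebUI port must be published on the VPN
     # container itself (e.g. -p 7879:7879 there), and this second Radarr has
     # to listen on a port other than the default 7878 in its config.xml.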