Ademar's Achievements

  1. You can add a SOCKS5 proxy under Settings in qBittorrent, provided you can reach its IP/hostname from your local network (for example, if the VPN runs on your router).
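Before pointing qBittorrent at the proxy, a plain TCP connect is a quick sanity check that the host/port is reachable at all. A minimal sketch (the router address and port in the comment are placeholders, not values from the post):

```python
import socket

def proxy_reachable(host: str, port: int = 1080, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical example: proxy_reachable("192.168.1.1", 1080)
```

This only confirms the port answers; it does not verify the SOCKS5 handshake itself.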
  2. Correct. (I use a local domain name, set in Pi-hole. So I set something like
  3. I had an issue like this recently; I just needed to set the "APP_URL" container variable. See if your log says anything about an IP address — that's what tipped me off.
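For reference, a container variable like that is passed when the container is created. A sketch only — the image name, port, and URL below are placeholders, not anything from the post:

```shell
# Hypothetical example -- substitute your own image, port mapping, and external URL.
docker run -d \
  --name myapp \
  -e APP_URL="https://app.example.com" \
  -p 8080:8080 \
  myimage:latest
```

On Unraid the same variable would be added via "Add another Path, Port, Variable, Label or Device" in the container's edit page.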
  4. For the record, I've had the same issue with the same device on 6.9.2.
  5. It was in /mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases. But in my situation I couldn't even load the web UI, so it might be a different issue.
  6. I also had a couple of broken containers. I would use WinSCP to go into the folders for those containers in appdata and see if there are any files with stricter permissions than the others — for example, some owned by "root" while the rest are owned by other users. I don't know why upgrading would break this, though.
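The same check can be done from the Unraid terminal instead of WinSCP. A hedged sketch — the appdata path and expected owner are assumptions (most Unraid containers run as nobody:users, but yours may differ):

```shell
# List anything under a container's appdata folder NOT owned by the expected user.
# APPDATA_DIR and EXPECTED_USER are placeholders -- adjust for your setup.
APPDATA_DIR="${APPDATA_DIR:-/mnt/user/appdata/mycontainer}"
EXPECTED_USER="${EXPECTED_USER:-nobody}"
if [ -d "$APPDATA_DIR" ]; then
  find "$APPDATA_DIR" ! -user "$EXPECTED_USER" -ls
fi
```

Anything the command prints is a candidate for the ownership mismatch described above.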
  7. After upgrading from 6.9.2 to 6.10.0 the official Plex image no longer works. The container is logging the same sqlite error message over and over:

     16:38:38 [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
     16:38:38 [s6-init] ensuring user provided files have correct perms...exited 0.
     16:38:38 [fix-attrs.d] applying ownership & permissions fixes...
     16:38:38 [fix-attrs.d] done.
     16:38:38 [cont-init.d] executing container initialization scripts...
     16:38:38 [cont-init.d] 40-plex-first-run: executing...
     16:38:38 [cont-init.d] 40-plex-first-run: exited 0.
     16:38:38 [cont-init.d] 45-plex-hw-transcode-and-connected-tuner: executing...
     16:38:38 [cont-init.d] 45-plex-hw-transcode-and-connected-tuner: exited 0.
     16:38:38 [cont-init.d] 50-plex-update: executing...
     16:38:38 [cont-init.d] 50-plex-update: exited 0.
     16:38:38 [cont-init.d] done.
     16:38:38 [services.d] starting services
     16:38:38 Starting Plex Media Server.
     16:38:38 [services.d] done.
     16:38:39 Error: Unable to set up server: sqlite3_statement_backend::loadOne: attempt to write a readonly database (N4soci10soci_errorE)
     16:38:39 Stopping Plex Media Server.
     16:38:39 kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
     16:38:44 Starting Plex Media Server.
     16:38:44 Error: Unable to set up server: sqlite3_statement_backend::loadOne: attempt to write a readonly database (N4soci10soci_errorE)
     16:38:44 Stopping Plex Media Server.
     16:38:44 kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
     16:38:49 [cont-finish.d] executing container finish scripts...
     16:38:49 [cont-finish.d] done.
     16:38:49 [s6-finish] waiting for services.
     16:38:49 [s6-finish] sending all processes the TERM signal.
     16:38:52 [s6-finish] sending all processes the KILL signal and exiting.
     16:39:19 Container stopped

     "plex-first-run" is mentioned — maybe the container mistakenly believes this is the first time it is running? I do not see anything wrong in the syslog, or any wrong file permissions. Has anyone else run into this problem? Edit: working again after giving everyone write access to the db file.
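The fix reported here (granting write access to the database) can be scripted roughly as below. A sketch only — the path is the default appdata location mentioned earlier in the thread and may differ on your system; stop the Plex container before touching the files:

```shell
# DB_DIR is an assumption -- adjust to wherever your Plex databases live.
DB_DIR="${DB_DIR:-/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases}"
if [ -d "$DB_DIR" ]; then
  # Restore read/write access on the SQLite database files.
  chmod -R u+rw,g+rw "$DB_DIR"
fi
```

A more targeted fix would be to correct the owner instead of widening permissions, once you know which user the container runs as.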
  8. I updated from 6.9.2 and got this message when starting the array: "Your flash drive is corrupted or offline. Post your diagnostics in the forum for help. See also here". Everything seems to be working normally. Edit: Plex (official docker container) is not working after the update. The container log shows this every few seconds:

     Error: Unable to set up server: sqlite3_statement_backend::loadOne: attempt to write a readonly database (N4soci10soci_errorE)
     Stopping Plex Media Server.
     kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]

     Edit 2: the scrutiny container is also having issues (not working at all; I won't post its logs in this thread).
  9. I have two old drives, and I'm trying to decide whether to keep using them or retire them. Let's say I have this setup:

     Parity: 8TB
     Parity 2: 4TB
     Disk 1: 4TB
     Disk 2: 4TB
     Disk 3: 4TB
     Disk 4: 4TB

     Parity 2 and Disk 4 are old, so what would happen if A) Parity 2 fails, or B) Disk 4 fails? In situation A, I would want to simply remove Parity 2 (then replace Disk 4 with a new 8TB disk). Is it possible to just remove Parity 2 after it has failed? In situation B, I would want to: remove Parity 2, as in situation A (as long as I don't lose any more disks, I should not lose any data), then replace Disk 4 with a new 8TB disk. Would it be possible to handle those two failure scenarios in the way described?
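On the redundancy math behind the question: with both parity drives present, the array survives any two simultaneous disk failures; after dropping Parity 2, it survives one. As a toy illustration of single parity (Unraid's first parity drive is a bitwise XOR across all data disks), losing any one disk is recoverable by XORing the survivors with the parity block — this is a sketch of the principle, not Unraid's actual code:

```python
from functools import reduce

def xor_parity(disks: list[bytes]) -> bytes:
    """Bitwise XOR across equal-sized disk blocks (single-parity scheme)."""
    return bytes(reduce(lambda a, b: a ^ b, block) for block in zip(*disks))

# Toy 4-byte "disks".
disks = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
parity = xor_parity(disks)

# Simulate losing disk 1: rebuild it from the survivors plus parity.
rebuilt = xor_parity([disks[0], disks[2], parity])
assert rebuilt == disks[1]
```

Lose two blocks at once, though, and XOR alone cannot disambiguate them — that is exactly what the second parity drive is for.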
  10. 2. Yeah, no reason that should be a problem. The original post is very old, I see the UI is different now, do you see how you can add them to the same? 3. The 802.1Q standard allows for 4096 VLANs, I assume the limit in unraid isn't lower than that. I don't know anything about bridge assignments, that doesn't apply to VLANs on my machine (again, five year old post).
  11. I wanted to change the MAC address on a Windows 10 VM, but after changing it (in "form view"), a packet capture on my router shows it's still using the old address. Is this a known bug? I should mention that at some point I probably changed the MAC and then restored the VM from an old backup by copying over the vdisk1.img file. I don't know if that matters.
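One thing worth checking is the VM's domain XML directly (XML view in the VM editor): the address lives on the interface element, and a stale value there — or a second interface element being the one actually in use — would explain the capture. A hypothetical snippet; the MAC and bridge name below are placeholders:

```xml
<!-- Excerpt from a libvirt domain definition (values are examples only) -->
<interface type='bridge'>
  <mac address='52:54:00:12:34:56'/>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```

Copying vdisk1.img back would not by itself revert the MAC, since the MAC is stored in the domain XML rather than the disk image — unless the XML was restored along with it.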
  12. Thanks for the tip, I ended up temporary moving all the data to the array, and building the cache from scratch.
  13. Originally I had just one 256GB cache drive. At some point I added a 500GB drive, and everything was automagically copied over from the original cache drive to form a mirror. Now I have installed another 500GB SSD and want to replace the 256GB drive (so "Cache" will be the new 500GB drive and "Cache 2" the old 500GB drive, still mirroring each other).

     With the array stopped, I selected my new 500GB drive as "Cache" (replacing the existing 256GB) and assumed all data would be copied over automatically from my existing 500GB "Cache 2" drive. But when I started the array, both cache drives showed the status "Unmountable: Too many missing/misplaced devices". I've changed back to the original setup, and I still have all my data mirrored on the old drives. So the question is: what is the proper procedure to replace the 256GB "Cache" drive with the new 500GB drive?

     (One more curiosity: after I got the "Unmountable" warning, I changed the selected "Cache" drive from my new 500GB back to the 256GB, and got a warning saying all data on the 256GB would be overwritten. But when I changed the selected "Cache" drive to "no drive" and then back to the 256GB, the warning disappeared and no data was overwritten. That the warning first appeared and then disappeared sounds like a tiny bug, perhaps? The old 500GB was selected as "Cache 2" the whole time.)
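For context on what a swap like this involves under the hood: when the cache pool is a btrfs RAID1 mirror, the underlying operation for exchanging a device is btrfs replace. This is a sketch of the mechanism only, under the assumption of a btrfs mirror mounted at /mnt/cache — the device paths are placeholders, and on Unraid the supported route is the GUI procedure rather than raw commands:

```shell
# Placeholders: /dev/sdX1 = old 256GB device, /dev/sdY1 = new 500GB device.
btrfs replace start /dev/sdX1 /dev/sdY1 /mnt/cache
btrfs replace status /mnt/cache
# After the replace completes, grow the filesystem onto the larger device
# (you may need to pass the device id, e.g. "1:max", on a multi-device pool):
btrfs filesystem resize max /mnt/cache
```

The "Too many missing/misplaced devices" error is consistent with the pool seeing an unexpected device set at mount time, which is why swapping the slot assignment alone did not trigger a rebuild.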
  14. Really? Is this the third time this year there has been some kind of web-UI-related issue? I'm just going to stop updating.