s449

Community Answers

  1. Found a solution by following the instructions in this answer: https://stackoverflow.com/a/46789939 However, I still expect them to get reverted at some point. If they do, I'll try to report what might have caused it.
  2. Just got this running successfully, and everything seems to work great. This may be a dumb question, but why doesn't this have an appdata folder? Do I need one? Is it not storing anything outside of the MongoDB database? I looked around the MongoDB database and, yes, the users and all the messages are stored there.
  3. This is only mildly Unraid-related: my Firefox favicons usually work, but every once in a while they get reverted to just the Unraid logo. Currently they look like this: [screenshot] Not sure why Radarr and Sonarr are okay. All of these apps do have favicons, so I'm not sure why they're not updating. I don't use a reverse proxy, so all of these point to https://[local ip]:[port]. Anyone else with the same experience? How do you manage it?
  4. Is there any way to diagnose why qBittorrentVPN is spiking my CPU? It keeps jumping from 1% -> 40% -> 2% -> 30%, etc., while my Unraid processor load sits around 25-30%. Here's what "docker stats" shows: 2024-03-15 11-34-27.mp4 This is with 0 active torrents. My CPU is an Intel Xeon E3-1230 v3 @ 3.30GHz (full specs in signature). Using qBittorrent v4.3.9 on Unraid 6.12.6, with the docker pinned to 2 cores / 4 threads, and AirVPN as my VPN client, if relevant. I assume the issue is that I have 3126 torrents in my client; I actually had 5000 but got it down just yesterday. The general higher-than-expected idle CPU load has been an issue for a few months now, though; I only now figured I should debug it. Nothing seemingly out of the ordinary in my qBittorrentVPN logs. Any insight would be appreciated! Or if it's clearly just my high torrent count, I'll accept that and keep whittling it down. (A quick per-process check inside the container is sketched after this list.)
  5. That makes sense! The torrent client version should be approved; I'm on 4.3.9. But good idea to check whether the VPN server itself is being blocked or rate-limited in some way. I'm going to try switching VPN servers. Thanks for the reply, I appreciate it!
  6. Lately I've often had the issue of a lot of private trackers being stuck on the status "Updating..." New torrents are always affected and can take up to 30 minutes to start after adding. For completed torrents, it seems random which have the tracker status stuck at "Updating..." and which are "Working". Public trackers work great: they instantly start downloading and seem to be extremely connectable; ratios on those torrents can be in the hundreds after only a week or so. I double-checked my forwarded port and everything seems to be okay. My best guess at this point is that I have 5464 torrents in my client right now, pinned to 2 cores and 4 threads on my machine, and it's just being bottlenecked somehow. Could this be the case? Or any ideas what could be causing the issue?
  7. I just started using a script with the cron schedule 0 2 * * * (every day at 2am), but looking at my Pushover logs it ran at 5am. My best guess is that I recently moved from PST to EST, and 2am PST would be 5am EST. /Settings/DateTime on my Unraid dashboard does say I'm in Eastern Time, however. Is it possible the plugin is still stuck on the old timezone? (A timezone sanity check is sketched after this list.) Thank you!
  8. My server runs on a Super Micro X10SLL-F board, which has dual 1Gbps Ethernet. I'm looking to upgrade my main Windows desktop to one whose motherboard has 2.5Gbps Ethernet. I'm seeing some very affordable dual 2.5Gbps PCIe network cards, but I'm skeptical of their reliability. I saw TP-Link has a single 2.5Gbps network card (TX201) which is well reviewed. Maybe this is more of a question for the General Support forum, but is there a way to have my main computer communicate with my server only through that dedicated 2.5Gbps port? (A rough sketch of such a setup is after this list.) If so, it seems like it could be nice to have a dedicated LAN link while continuing to have the WAN reliability of the dual 1Gbps ports directly on the motherboard.
  9. I've noticed the Docker container has been stopping randomly. It's not too annoying, since on the rare chance I need to upload or retrieve documents I just spin it back up. My best guess is that the weekly update of the Redis or Paperless container causes it, since they rely on each other. Does that sound right? Is there a way to fix this besides a User Script that routinely checks whether Paperless is running and starts it if not? (A watchdog sketch is after this list.) Here are the logs from the stopped container:

     [2023-04-02 04:00:00,450] [INFO] [celery.app.trace] Task paperless_mail.tasks.process_mail_accounts[50e61c75-e7f7-4e23-aff8-ee65d73cfca9] succeeded in 0.031346329022198915s: 'No new documents were added.'
     [2023-04-02 04:00:17,470] [INFO] [paperless.management.consumer] Received SIGINT, stopping inotify worker: Warm shutdown (MainProcess)
     [2023-04-02 04:00:18 -0700] [318] [INFO] Handling signal: term
     [2023-04-02 04:00:19 -0700] [318] [INFO] Shutting down: Master
     2023-04-02 04:00:16,468 WARN received SIGTERM indicating exit request
     2023-04-02 04:00:16,468 INFO waiting for gunicorn, celery, celery-beat, consumer to die
     2023-04-02 04:00:17,820 INFO stopped: consumer (exit status 0)
     celery beat v5.2.7 (dawn-chorus) is starting.
     LocalTime -> 2023-04-01 08:06:56
     Configuration ->
         . broker -> redis://192.168.1.200:6379//
         . loader -> celery.loaders.app.AppLoader
         . scheduler -> celery.beat.PersistentScheduler
         . db -> /usr/src/paperless/data/celerybeat-schedule.db
         . logfile -> [stderr]@%INFO
         . maxinterval -> 5.00 minutes (300s)
     2023-04-02 04:00:18,106 INFO stopped: celery-beat (exit status 0)
     2023-04-02 04:00:18,902 INFO stopped: celery (exit status 0)
     2023-04-02 04:00:19,649 INFO stopped: gunicorn (exit status 0)
     Paperless-ngx docker container starting...
     Installing languages...
     Hit:1 http://deb.debian.org/debian bullseye InRelease
     Get:2 http://deb.debian.org/debian-security bullseye-security InRelease [48.4 kB]
     Get:3 http://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]
     Fetched 92.4 kB in 1s (158 kB/s)
     Reading package lists...
     Package tesseract-ocr-ara already installed!
     Creating directory /tmp/paperless
     Adjusting permissions of paperless files. This may take a while.
     Waiting for Redis...
     Redis ping #0 failed. Error: Error 111 connecting to 192.168.1.200:6379. Connection refused.. Waiting 5s
     Redis ping #1 failed. Error: Error 111 connecting to 192.168.1.200:6379. Connection refused.. Waiting 5s
     Redis ping #2 failed. Error: Error 111 connecting to 192.168.1.200:6379. Connection refused.. Waiting 5s
     Redis ping #3 failed. Error: Error 111 connecting to 192.168.1.200:6379. Connection refused.. Waiting 5s
     Redis ping #4 failed. Error: Error 111 connecting to 192.168.1.200:6379. Connection refused.. Waiting 5s
     Failed to connect to redis using environment variable PAPERLESS_REDIS.
     ** Press ANY KEY to close this window **
  10. I saw that, and yeah, I ran short SMART tests with no errors. The attributes all look fine except for the excessive LBAs written. A brand-new replacement SSD would only be $65, so I'll probably just replace it anyway. But I am curious: can an SSD be dying and not report any SMART/attribute errors? Is it possible it's not dying and my btrfs pool just needs to be re-balanced or something? I'm also not convinced 161 TBW is enough to kill a drive when Samsung's site says "600 TBW for 1 TB model" (my cache is two Samsung 860 EVO 1TB drives). (The TBW arithmetic is worked through after this list.)
  11. This is the second time in a few days that I've hit this error. Fix Common Problems alerts me that there are errors; I get two: "Your drive is either completely full or mounted read-only" (but my drives are not full) and something about my docker.img being full (it isn't). Both times my Docker service fails, and on my Docker tab I see:

      Warning: stream_socket_client(): unable to connect to unix:///var/run/docker.sock (Connection refused) in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 712
      Couldn't create socket: [111] Connection refused
      Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 898
      Warning: stream_socket_client(): unable to connect to unix:///var/run/docker.sock (Connection refused) in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 712
      Couldn't create socket: [111] Connection refused
      Warning: Invalid argument supplied for foreach() in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 967

      I've attached the diagnostics from after the 2nd occurrence. Also, both times I try to stop the array it gets stuck on "Retry unmounting user share(s)..." and no amount of trying to umount myself, or finding and killing processes, fixes it (a sketch for tracking down what's pinning the mount is after this list). The only thing that un-sticks it is running "reboot" from the console, which gets detected as an unclean shutdown. My best guess is that one of my cache drives is dying. One of them is an older drive with 161 TB written (347424289620 total LBAs); the other is only around 30 TB. apollo-diagnostics-20230307-0858.zip
  12. Same as the above. A few hours ago my server completely crashed: Plex stopped working, Docker wouldn't start, and Fix Common Problems reported an error about a file system being corrupt or read-only. The times I've seen that error before were because docker.img or the cache was full; neither was the case this time. I tried stopping all my Docker containers to debug a possibly failing drive, but Docker just crashed entirely. My array wouldn't stop because the drives were unmountable; it was stuck in a loop of trying to unmount disks. Docker containers had tasks running judging by lsof, but I couldn't access anything Docker-related. I ended up just running a reboot command, which ended up being an unclean shutdown. I started my array in maintenance mode and checked all the drives: no errors. There were some plugin updates (I'm guessing the recent ones for this plugin and Unassigned Devices), so I updated. Everything seems to be fine now except for the error "Invalid folder addons contained within /mnt". This is what's there: [screenshot] All my plugins are up to date and I still have that error. Judging by previous posts here, specifically this one: [quoted post] ...I'm speculating the crash is related to the posts above, or to whatever the 03.03.2023 Unassigned Devices update fixed? I don't know what rootfs is, but something getting 100% full sounds like it would cause the corrupt or read-only file-system error that's usually from cache or docker.img being full. I've never seen a crash like this before. Hopefully this info helps; currently everything seems to be working fine again. Sorry for the probably unrelated post.
  13. Minor issue, but apparently I hit ignore on this warning a long while ago and the "MONITOR WARNING / ERROR" button doesn't work; clicking it does nothing. It's been like this for maybe a year, and I just never bothered looking into fixing it.
  14. Hey, just throwing my hat in the ring to say that I'm also getting these errors:

      Failed to fetch record!
      ***** Samba name server TIMEMACHINE is now a local master browser for workgroup WORKGROUP on subnet 192.168.1.2 *****
      error in mds_init_ctx for: /opt/timemachine
      _mdssvc_open: Couldn't create policy handle for TimeMachine
      (the last two lines repeat seven times in total)

      I managed to get a full backup and it seems to be working just fine. It might just be a bug or something; I'm not too concerned.

      --- GENERAL HELP FOR OTHERS ---

      • Make sure your share name is the same as the docker compose's "User Name". My share was called "time machine" but my username was the default "timemachine", and the mismatch filled up my docker.img.
      • It's often recommended to keep Time Machine backups on a single disk. I'm not sure if it applies here, but it doesn't hurt.
      • The usual answer to "what size should my Time Machine backup be?" is 2x your Mac's disk size. I have a 1TB disk, so I made it 2TB.

      Hope that helps!
  15. Hey! I have the same issue. I'm trying to break the habit of using Unbalance, but I've been using it for a while now and have accumulated a lot of empty folders. Since running that, any issues? I just always want to be extra cautious about running any scripted delete commands (a dry-run version is sketched after this list).
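
Sketch for item 4 (qBittorrentVPN CPU spikes): a quick way to see which process inside the container is actually burning CPU. The container name is an assumption, and this assumes the image ships a standard top:

    # one batch-mode snapshot of the busiest processes inside the container
    docker exec qbittorrentvpn top -b -n 1 | head -n 15

With thousands of torrents loaded, the qbittorrent-nox process itself would be the expected culprit; if the VPN process dominates instead, the tunnel side is worth a look.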
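Sketch for item 7 (cron firing at the wrong hour): a minimal timezone sanity check you can drop at the top of the script; the log path is just an example:

    # log the wall-clock time and zone the job actually runs under
    echo "ran at $(date '+%F %T %Z')" >> /tmp/cron-tz-check.log

If this still shows the old zone after changing Settings -> Date & Time, the cron daemon has likely cached the previous timezone; a reboot (which restarts cron) should clear it.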
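Sketch for item 8 (dedicated 2.5Gbps link): the usual approach is to put the point-to-point link on its own subnet so only server-to-desktop traffic uses it. The interface name and addresses below are assumptions, and on Unraid you would normally configure this via Settings -> Network Settings rather than by hand:

    # server side: give the 2.5Gbps NIC an address on an otherwise unused subnet
    ip addr add 10.10.10.1/24 dev eth2
    ip link set eth2 up

On the Windows desktop, give its 2.5Gbps port 10.10.10.2/24 with no default gateway, then map shares against 10.10.10.1. Everything else (internet and the rest of the LAN) keeps using the onboard 1Gbps ports and their default route.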
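Sketch for item 9 (Paperless stopping after updates): a minimal watchdog for the User Scripts plugin, scheduled with something like a */10 * * * * custom cron. The container names are assumptions; match them to your templates:

    #!/bin/bash
    # restart redis and paperless if either container has stopped
    for name in redis paperless-ngx; do
        if [ -z "$(docker ps -q -f name="^${name}$" -f status=running)" ]; then
            echo "$(date): ${name} is not running, starting it"
            docker start "${name}"
        fi
    done

Checking redis before paperless-ngx matters here, since the logs show Paperless giving up when it can't reach Redis on startup.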
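Worked numbers for items 10 and 11 (the "161 TB written" figure): converting the SMART Total_LBAs_Written raw value to bytes, assuming the usual 512-byte unit for that attribute:

    echo $(( 347424289620 * 512 ))   # 177881236285440 bytes, i.e. ~177.9 TB or ~161.8 TiB

So the reported 161 TB checks out, and it is well under the 600 TBW endurance rating Samsung quotes for the 1TB 860 EVO, which supports the idea that wear alone isn't the problem.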
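Sketch for item 11 (array stuck on "Retry unmounting user share(s)..."): finding what is pinning the share before resorting to a reboot:

    # list the processes holding files open on the user share
    fuser -vm /mnt/user
    # same idea via lsof; anything listed must exit before the unmount can succeed
    lsof /mnt/user 2>/dev/null

Often it's a container that didn't stop, or a shell session still cd'd somewhere inside /mnt/user.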
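Sketch for item 15 (cleaning up Unbalance's leftover empty folders): a dry-run-first version; the share path is a placeholder:

    # list empty directories only; review this output first
    find /mnt/user/YourShare -type d -empty -print
    # once reviewed, the same command with -delete removes them
    # find /mnt/user/YourShare -type d -empty -delete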