sol

Members

  • Posts: 43
  • Joined
  • Last visited
  • Converted
  • Gender: Undisclosed

sol's Achievements

Rookie (2/14)

Reputation: 3
Community Answers: 1

  1. Mullvad killed port forwarding this month. Didn't know about it until today, when things didn't seem to be working right. Privoxy stopped functioning when port forwarding was removed, for some reason. They said they were removing it July 1, but things didn't break until today. I re-downloaded the WireGuard config and everything seems to be running again. How bad is not having port forwarding going to screw me? Is it mostly torrent speeds?
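     For what it's worth, a quick way to sanity-check that the re-downloaded WireGuard config is actually up is Mullvad's own connection-check endpoint. Just a sketch; run it from the box or container whose traffic is supposed to be tunneled:

        # Plain-text answer: "You are connected to Mullvad..." (or not)
        curl https://am.i.mullvad.net/connected

        # Same check as JSON, including the exit IP and server
        curl https://am.i.mullvad.net/json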
  2. NEVERMIND. Fixed. Bazarr doesn't allow any special ways of excluding IPs for Privoxy. You have to list each IP separately. No wildcards, no CIDR. I'm an idiot and can't read. I'm having this same issue. Bazarr has stopped working because it can't connect to Sonarr or Radarr, even though I've told it to exclude those IPs. When I test Bazarr's communication to Sonarr or Radarr, it times out on port 8118, which is Privoxy. If I turn off Privoxy in Bazarr, it works fine. I like having all my arr traffic going through my VPN, but it just doesn't seem to want to work for Bazarr. Are there more ways to test Privoxy?
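     A couple of ways to test Privoxy from the command line, outside of Bazarr's own connection test. The 192.168.1.10 address and the Sonarr port are placeholders; swap in your own:

        # Fetch Sonarr's UI through Privoxy; any HTTP response means the proxy path works
        curl -v -x http://192.168.1.10:8118 http://192.168.1.10:8989/

        # Same request without the proxy, for comparison
        curl -v http://192.168.1.10:8989/

        # Privoxy's built-in status page, which only resolves when the request
        # really goes through Privoxy
        curl -x http://192.168.1.10:8118 http://config.privoxy.org/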
  3. New email from Google. Anyone else get this? Anyone figured out how to proceed? (edited out some identifying content)

     Our records indicate you have OAuth clients that used the OAuth OOB flow in the past.

     Hello Google OAuth Developer,
     We are writing to inform you that OAuth out-of-band (OOB) flow will be deprecated on October 3, 2022, to protect users from phishing and app impersonation attacks.

     What do I need to know?
     Starting October 3, 2022, we will block OOB requests to Google’s OAuth 2.0 authorization endpoint for existing clients. Apps using OOB in testing mode will not be affected. However, we strongly recommend you to migrate them to safer methods as these apps will be immediately blocked when switching to in production status. Note: New OOB usage has already been disallowed since February 28, 2022.

     Below are key dates for compliance:
     September 5, 2022: A user-facing warning message may be displayed to non-compliant OAuth requests
     October 3, 2022: The OOB flow is blocked for all clients and users will see the error page.
     Please check out our recent blog post about Making Google OAuth interactions safer for more information.

     What do I need to do?
     Migrate your app(s) to an appropriate alternative method by following these instructions:
     - Determine your app(s) client type from your Google Cloud project by following the client links below.
     - Migrate your app(s) to a more secure alternative method by following the instructions in the blog post above for your client type.
     If necessary, you may request a one-time extension for migrating your app until January 31, 2023. Keep in mind that all OOB authorization requests will be blocked on February 1, 2023.

     The following OAuth client(s) will be blocked on Oct 3, 2022.
     OAuth client list:
     Project ID: rcloneclientid-247***
     Client: 211984046708-hahav9pt2t2v6mc6*********apps.googleusercontent.com

     Thanks for choosing Google OAuth.
     — The Google OAuth Developer Team
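     If the client in question is rclone (the Project ID above suggests it is), one migration path that seems to work is simply updating rclone to a release that no longer uses the OOB flow and refreshing the token. A rough sketch, assuming the remote is named gdrive; adjust to taste:

        # Newer rclone builds (roughly v1.58+) use a local redirect
        # (http://127.0.0.1:53682/) instead of the copy/paste OOB code
        rclone version

        # Re-run authorization for the existing remote so a fresh token
        # is obtained with the non-OOB flow
        rclone config reconnect gdrive:

     I haven't confirmed the exact version cutoff, so treat the version number as an assumption.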
  4. Replaced the cache drive and formatted it XFS. Moved everything back and rebuilt the docker containers. Up nine days with no issues. No full log, no read-only cache. Obviously some kind of cache drive issue, even though it tested good on an extended SMART test. Likely a btrfs issue that couldn't be easily solved with a solo drive. btrfs probably should not be the default format for solo cache drives in Unraid. Solved.
  5. Yeah, I am going to re-format to XFS and check the drive health. Maybe replace the drive while I'm at it.
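     For the drive-health check, something like this with smartmontools should do it (the /dev/sdf device name is an assumption, taken from the btrfs check output further down):

        # Start a long self-test, then read the results once it finishes
        smartctl -t long /dev/sdf
        smartctl -a /dev/sdf

        # The attributes most often tied to failing media
        smartctl -A /dev/sdf | grep -Ei 'reallocated|pending|uncorrect'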
  6. The container throwing errors at 2:08 in the above docker.log.1 is Plex. Which makes sense, as it is one of the few containers that do weekly tasks, like:
     • Optimize database every week
     • Remove old bundles every week
     • Remove old cache files every week
     • Refresh local metadata every three days
     • Update all libraries during maintenance
     • Upgrade media analysis during maintenance
     • Refresh library metadata periodically
     • Perform extensive media analysis during maintenance
     • Fetch missing location names for items in photo sections
     • Analyze and tag photos
  7. time="2022-02-17T09:39:13.452768189-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/11042cc115649e3ab6
     time="2022-02-17T09:39:14.623462556-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b561fa76d6448f3866
     time="2022-02-17T09:39:22.156585651-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2797cb4cbee2465f19
     time="2022-02-17T09:39:29.803429402-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/46ed75f24b478dac09
     time="2022-02-17T09:39:42.249379513-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5d9bc873b3912de1d8
     time="2022-02-17T09:41:56.087910866-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/813c029f1297badc0d
     time="2022-02-17T13:17:05.810879095-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/11042cc115649e3ab6
     time="2022-02-17T16:02:40.611782191-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1e12f74ffa47552535
     time="2022-02-17T16:10:26.454018823-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2917c4264a8d7fa646
     time="2022-02-17T16:12:57.570313462-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6e5057e30f675ad9e1
     time="2022-02-17T19:42:31.947233453-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/029848900a6b20de1f
     time="2022-02-20T03:02:27.631077081-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/074683c5cf36bad42a
     time="2022-02-20T03:02:27.869498318-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e7fbdfeaf2938bf844
     time="2022-02-20T03:02:28.093142878-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e971c0d73df2f126ce
     time="2022-02-20T03:02:28.759846471-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5fde6164eee377132e
     time="2022-02-20T18:15:32.855833192-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e7fbdfeaf2938bf844
     time="2022-02-20T18:15:35.908244972-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e971c0d73df2f126ce
     time="2022-02-20T18:21:25.657697206-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/361f9711b624934400
     time="2022-02-20T18:30:55.243448330-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/623879e0f6a156ec9f
     time="2022-02-20T18:31:30.396987709-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/623879e0f6a156ec9f
     time="2022-02-20T18:31:33.436374626-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/361f9711b624934400
     time="2022-02-21T05:16:53.142672669-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/839d03928f88bace48
     time="2022-02-21T05:16:53.338638453-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/46b04ec6c0c08c67c0
     time="2022-02-21T05:16:53.483946879-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c683260bf2fce1850a
     time="2022-02-21T05:16:53.747645779-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/623879e0f6a156ec9f
     time="2022-02-21T05:16:53.974481072-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/361f9711b624934400
     time="2022-02-21T05:16:54.837505517-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/074683c5cf36bad42a
     time="2022-02-21T05:16:55.094084245-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b561fa76d6448f3866
     time="2022-02-21T05:16:55.635014188-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5fde6164eee377132e
     time="2022-02-21T05:16:55.880927303-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6e5057e30f675ad9e1
     time="2022-02-21T05:16:56.624917396-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/46ed75f24b478dac09
     time="2022-02-21T05:16:57.072193293-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4bf9a4eefa4506e462
     time="2022-02-21T05:16:57.458395737-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/813c029f1297badc0d
     time="2022-02-21T05:16:57.846209906-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5d9bc873b3912de1d8
     time="2022-02-23T08:04:16.299433190-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/cfac2a914f66fac185
     time="2022-02-23T08:04:32.226963630-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e13546ef3b1fc375f
     time="2022-02-23T08:04:45.446979587-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/66012d868342c76e6f
     time="2022-02-23T08:05:28.088544708-06:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ade6eb0470838d3937
     time="2022-02-24T02:08:39.381326314-06:00" level=error msg="Failed to log msg \"\" for logger json-file: error writing log entry: write /var/lib/docker/containers/623879e0f6a156ec9f5f2e68e
     time="2022-02-24T02:08:39.388577844-06:00" level=error msg="Failed to log msg \"\" for logger json-file: error writing log entry: write /var/lib/docker/containers/623879e0f6a156ec9f5f2e68e
     time="2022-02-24T02:08:39.388614598-06:00" level=error msg="Failed to log msg \"\" for logger json-file: error writing log entry: write /var/lib/docker/containers/623879e0f6a156ec9f5f2e68e
     time="2022-02-24T02:08:39.388655682-06:00" level=error msg="Failed to log msg \"\" for logger json-file: error writing log entry: write /var/lib/docker/containers/623879e0f6a156ec9f5f2e68e
     time="2022-02-24T02:08:40.382377322-06:00" level=error msg="Failed to log msg \"\" for logger json-file: error writing log entry: write /var/lib/docker/containers/623879e0f6a156ec9f5f2e68e
     time="2022-02-24T02:08:40.389256094-06:00" level=error msg="Failed to log msg \"\" for logger json-file: error writing log entry: write /var/lib/docker/containers/623879e0f6a156ec9f5f2e68e
     time="2022-02-24T02:08:40.389308017-06:00" level=error msg="Failed to log msg \"\" for logger json-file: error writing log entry: write /var/lib/docker/containers/623879e0f6a156ec9f5f2e68e
     time="2022-02-24T02:08:40.389369289-06:00" level=error msg="Failed to log msg \"\" for logger json-file: error writing log entry: write /var/lib/docker/containers/623879e0f6a156ec9f5f2e68e
     time="2022-02-24T02:08:41.340254415-06:00" level=error msg="Error replicating health state for container 6e5057e30f675ad9e12892bdab82a35ccbf6c79520f015c145eb219607b9392f: open /var/lib/doc
     time="2022-02-24T02:08:41.383229258-06:00" level=error msg="Failed to log msg \"\" for logger json-file: error writing log entry: write /var/lib/docker/containers/623879e0f6a156ec9f5f2e68e
     time="2022-02-24T02:08:41.389602197-06:00" level=error msg="Failed to log msg \"\" for logger json-file: error writing log entry: write /var/lib/docker/containers/623879e0f6a156ec9f5f2e68e
     time="2022-02-24T02:08:41.389639522-06:00" level=error msg="Failed to log msg \"\" for logger json-file: error writing log entry: write /var/lib/docker/containers/623879e0f6a156ec9f5f2e68
     The beginning of the errors of docker.log.1 are at the bottom here.
  8. root@tmedia:~# df -h /var/log
     Filesystem      Size  Used Avail Use% Mounted on
     tmpfs           384M  384M     0 100% /var/log
     root@tmedia:~# du -sm /var/log/*
     1    /var/log/apcupsd.events
     1    /var/log/apcupsd.events.1
     0    /var/log/btmp
     0    /var/log/cron
     0    /var/log/debug
     1    /var/log/dmesg
     0    /var/log/docker.log
     383  /var/log/docker.log.1
     0    /var/log/faillog
     1    /var/log/gitflash
     0    /var/log/lastlog
     0    /var/log/libvirt
     1    /var/log/maillog
     0    /var/log/messages
     0    /var/log/nfsd
     1    /var/log/nginx
     0    /var/log/packages
     1    /var/log/pkgtools
     0    /var/log/plugins
     0    /var/log/pwfail
     0    /var/log/removed_packages
     0    /var/log/removed_scripts
     0    /var/log/removed_uninstall_scripts
     1    /var/log/samba
     0    /var/log/scripts
     0    /var/log/secure
     0    /var/log/setup
     0    /var/log/spooler
     0    /var/log/swtpm
     1    /var/log/syslog
     2    /var/log/syslog.1
     0    /var/log/unraid-api
     0    /var/log/vfio-pci
     1    /var/log/wtmp
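     Since /var/log is a small tmpfs, the quickest way to get breathing room without a reboot is just to empty the runaway file (a sketch; docker.log.1 is the offender per the du output above):

        # Zero out the rotated docker log and confirm the space comes back
        truncate -s 0 /var/log/docker.log.1
        df -h /var/log

     That only treats the symptom, of course; whatever is spamming docker.log.1 will fill it again.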
  9. Is PIA still the darling for a port-forwarding VPN?
  10. Every Thursday morning for a few months now I wake up to a banner on my docker page (haven't screenshotted it, sorry) with no icons or access to any docker containers. A reboot seems to fix it. Unraid typically just "runs" for me and has for years, so I wasn't sweating it. Today I had some time to dig around a bit. I can't find any processes/jobs that are running on a Wednesday night/Thursday morning (there HAS to be something...), but I did find that there were a bunch of btrfs errors in the log. I only have btrfs on my cache drive, which is where the docker folder is stored, of course. I restarted the system and ran a memtest with no errors reported. I then started the array in maintenance mode and ran a btrfs check on the cache drive and got this, but I don't have any idea what to do from here:
      [1/7] checking root items
      [2/7] checking extents
      data backref 40881405952 root 5 owner 46973646 offset 0 num_refs 0 not found in extent tree
      incorrect local backref count on 40881405952 root 5 owner 46973646 offset 0 found 1 wanted 0 back 0x1ae7ab40
      incorrect local backref count on 40881405952 root 5 owner 46973646 offset 32768 found 0 wanted 1 back 0x183e3cb0
      backref disk bytenr does not match extent record, bytenr=40881405952, ref bytenr=0
      backpointer mismatch on [40881405952 45056]
      ref mismatch on [48950460416 8192] extent item 549755813888, found 0
      owner ref check failed [48950460416 8192]
      ERROR: errors found in extent allocation tree or chunk allocation
      [3/7] checking free space tree
      [4/7] checking fs roots
      root 5 inode 74013114 errors 200, dir isize wrong
      root 5 inode 77896360 errors 1, no inode item
      unresolved ref dir 74013114 index 3862251 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896362 errors 1, no inode item
      unresolved ref dir 74013114 index 3862253 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896370 errors 1, no inode item
      unresolved ref dir 74013114 index 3862255 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896373 errors 1, no inode item
      unresolved ref dir 74013114 index 3862257 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896374 errors 1, no inode item
      unresolved ref dir 74013114 index 3862259 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896379 errors 1, no inode item
      unresolved ref dir 74013114 index 3862261 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896382 errors 1, no inode item
      unresolved ref dir 74013114 index 3862263 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896383 errors 1, no inode item
      unresolved ref dir 74013114 index 3862265 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896384 errors 1, no inode item
      unresolved ref dir 74013114 index 3862267 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896385 errors 1, no inode item
      unresolved ref dir 74013114 index 3862269 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896388 errors 1, no inode item
      unresolved ref dir 74013114 index 3862271 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896391 errors 1, no inode item
      unresolved ref dir 74013114 index 3862273 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896392 errors 1, no inode item
      unresolved ref dir 74013114 index 3862275 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896393 errors 1, no inode item
      unresolved ref dir 74013114 index 3862277 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896394 errors 1, no inode item
      unresolved ref dir 74013114 index 3862279 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896397 errors 1, no inode item
      unresolved ref dir 74013114 index 3862281 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896400 errors 1, no inode item
      unresolved ref dir 74013114 index 3862283 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896401 errors 1, no inode item
      unresolved ref dir 74013114 index 3862285 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896402 errors 1, no inode item
      unresolved ref dir 74013114 index 3862287 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      root 5 inode 77896403 errors 1, no inode item
      unresolved ref dir 74013114 index 3862289 namelen 45 name 3d18b2ea89fa5357cf6423b4e6985b46fa10195e.json filetype 1 errors 5, no dir item, no inode ref
      ERROR: errors found in fs roots
      Opening filesystem to check...
      Checking filesystem on /dev/sdf1
      UUID: 482f734c-0b01-4faf-8bc0-86c08db6bd62
      cache and super generation don't match, space cache will be invalidated
      found 137627197440 bytes used, error(s) found
      total csum bytes: 131599768
      total tree bytes: 2780954624
      total fs tree bytes: 2420736000
      total extent tree bytes: 176078848
      btree space waste bytes: 704096014
      file data blocks allocated: 190648750080
       referenced 137950638080
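      For anyone landing here before reformatting: a few lower-risk btrfs commands that can help characterize the damage first (a sketch; /dev/sdf1 comes from the output above, and /mnt/cache is the usual Unraid cache mount point):

         # Read-only check from Maintenance mode (what produced the output above);
         # avoid --repair unless you have backups and nothing else has worked
         btrfs check --readonly /dev/sdf1

         # With the array started normally, a scrub verifies checksums on the mounted cache
         btrfs scrub start -B /mnt/cache
         btrfs scrub status /mnt/cache

         # Per-device error counters (write/read/flush/corruption/generation)
         btrfs dev stats /mnt/cache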
  11. I have a CyberPower CP1500PFCLCD, very similar to yours, with the same issue. The log fills up with:
      Dec 23 12:38:43 tmedia usbhid-ups[4122]: nut_libusb_get_report: Input/Output Error.
      Dec 23 12:40:53 tmedia usbhid-ups[4122]: nut_libusb_get_report: Input/Output Error.
      Dec 23 12:42:17 tmedia usbhid-ups[4122]: nut_libusb_get_report: Input/Output Error.
      Dec 23 12:43:35 tmedia usbhid-ups[4122]: nut_libusb_get_report: Input/Output Error.
      etc etc.
      Have you figured out how to stop it?
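      One workaround that gets suggested for CyberPower units on NUT is forcing the usbhid-ups driver into polling-only mode. A sketch of the ups.conf entry (the section name and port value are assumptions; keep whatever you already have and just add the flag):

         [cyberpower]
             driver = usbhid-ups
             port = auto
             pollonly

      Then restart the NUT driver/services and see whether the nut_libusb_get_report spam stops. No guarantees; it's just the first thing I'd try.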
  12. I guess the question now is: be proactive and try to get signed up with one user at $20/month unlimited, and eat the paltry $8 increase for a few extra months? Or let them transition me sometime next year and see what they sign me up for? Probably a terrible idea to leave it in their hands. I'll likely wait until they start warning me with an actual conversion date before I try to switch.
  13. Looks like we are getting down to the wire on the Google Workspace transition. Getting the email below now. Any recommendations/thoughts? One user, just over 7TB (growing slowly).

      Hello Administrator,
      We previously notified you that your G Suite subscription will transition to a Google Workspace subscription. We’re writing to let you know that you can now begin your transition. There are two options:
      Option 1 (recommended): Self-transition now in a few easy steps.
      Option 2: Let Google transition you automatically once your organization is eligible*, starting from January 31, 2022. We will provide you with at least 30 days notice before your transition date.

      (There's more, but it's relatively unimportant.)
  14. Looks like it is some kind of issue with ca-montreal. Changed to ca-ontario and speeds and logs look normal. Thanks for your kind attention. It gives me the confidence to dive in and tinker.