sol

Members · 31 posts
  1. I guess the question now is, be pro-active and try to get signed up with one user at $20/month unlimited and eat the paltry $8 increase for a few extra months? Or, let them transition me sometime next year and see what they sign me up for. Probably a terrible idea to leave it in their hands. I'll likely wait until they start warning me with an actual conversion date before I try to switch.
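The trade-off in the post above is simple arithmetic. A minimal sketch, assuming the legacy plan was $12/user/month (inferred from the "$8 increase" up to the $20/month plan mentioned in the post — these numbers are assumptions, not Google's actual pricing):

```python
# Break-even sketch for self-transitioning early vs. waiting for Google.
# OLD_PRICE is an assumption inferred from the post's "$8 increase".
OLD_PRICE = 12.0   # assumed legacy G Suite price, per user/month
NEW_PRICE = 20.0   # new Workspace price from the post, per user/month

def extra_cost_of_switching_early(months_early: int) -> float:
    """Extra dollars paid by switching `months_early` months before
    Google would have forced the transition anyway."""
    return (NEW_PRICE - OLD_PRICE) * months_early

# Switching three months early costs an extra $24 versus waiting.
print(extra_cost_of_switching_early(3))  # -> 24.0
```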
  2. Looks like we are getting down to the wire on the Google Workspace transition. Getting the email below now. Any recommendations/thoughts? One user, just over 7TB (growing slowly).

     Hello Administrator,

     We previously notified you that your G Suite subscription will transition to a Google Workspace subscription. We're writing to let you know that you can now begin your transition. There are two options:

     Option 1 (recommended): Self-transition now in a few easy steps.
     Option 2: Let Google transition you automatically once your organization is eligible*, starting from January 31, 2022. We will provide you with at least 30 days notice before your transition date.

     (There's more but relatively unimportant)
  3. Looks like it is some kind of issue with ca-montreal. Changed to ca-ontario and speeds and logs look normal. Thanks for your kind attention. It gives me the confidence to dive in and tinker.
  4. I restarted mine and got these interesting results;

     2021-11-08 10:24:18,692 DEBG 'start-script' stdout output:
     [warn] PIA VPN info API currently down, skipping endpoint port forward check
     2021-11-08 10:24:50,767 DEBG 'start-script' stdout output:
     [warn] Unable to successfully download PIA json to generate token from URL 'https://privateinternetaccess.com/gtoken/generateToken'
     [info] 12 retries left
     [info] Retrying in 10 secs...
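The log above shows a countdown-and-wait retry loop ("12 retries left ... Retrying in 10 secs..."). A minimal sketch of that pattern — the function name and parameters are mine, not the container's actual start-script code:

```python
# Retry-with-fixed-delay loop, mimicking the [info] output seen above.
import time

def retry(fn, retries=12, delay=10, sleep=time.sleep):
    """Call fn() until it succeeds or retries are exhausted."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            print(f"[info] {retries - attempt - 1} retries left")
            print(f"[info] Retrying in {delay} secs...")
            sleep(delay)
    raise RuntimeError(f"giving up after {retries} retries")

# Demo: a call that fails twice, then succeeds on the third attempt.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise IOError("API down")
    return "token"

print(retry(flaky, sleep=lambda s: None))  # -> token
```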
  5. Did PIA change its port forwarded servers again? I'm getting all KB speeds this morning. My supervisord.log only shows;

     [info] qatar.privacy.network
     [info] saudiarabia.privacy.network
     [info] sg.privacy.network
     [info] srilanka.privacy.network
     [info] taiwan.privacy.network
     [info] tr.privacy.network
     [info] ae.privacy.network
     [info] vietnam.privacy.network
     [info] aus-melbourne.privacy.network
     [info] au-sydney.privacy.network
     [info] aus-perth.privacy.network
     [info] nz.privacy.network
     [info] dz.privacy.network
     [info] egypt.privacy.network
     [info] morocco.privacy.network
     [info] nigeria.privacy.network
     [info] za.privacy.network

     None of which I currently have configured.
  6. UPDATE: I figured this out after about four hours of re-teaching myself lol. Something odd happened in Google Workspace: App Access Control (API) was untrusted. I re-enabled it and then had to run rclone config as headless, using my Workspace admin account to get the token and update it. I screwed it up the first time by using my main (old) regular Google (Gmail) account and could see my personal Google Drive in rclone lol. Using the admin account for Workspace fixed that. I really appreciate this forum. It gives me the confidence to poke around! I figured I was fine as long as I kept copies of the encryption passwords for the crypt portion and, sure enough, I eventually got it.

     Lost my mount three days ago apparently and it looks like the token has expired. From the mount script log;

     couldn't fetch token - maybe it has expired? - refresh with "rclone config reconnect gdrive{UpdQG}:": oauth2: cannot fetch token: 400 Bad Request
     Response: {
       "error": "invalid_grant",
       "error_description": "Token has been expired or revoked."
     }

     The "rclone config reconnect" command in the log doesn't work, I get;

     Error: backend doesn't support reconnect or authorize
     Usage:
       rclone config reconnect remote: [flags]
     Flags:
       -h, --help   help for reconnect
     Use "rclone [command] --help" for more information about a command.
     Use "rclone help flags" for to see the global flags.
     Use "rclone help backends" for a list of supported services.
     2021/08/16 23:40:29 Fatal error: backend doesn't support reconnect or authorize

     Going to need some detailed help. I set this up a few years ago and it's been cruising along on its own just fine until now. Thanks.
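For what it's worth, the "backend doesn't support reconnect or authorize" error in the post above typically appears when the reconnect targets a crypt wrapper rather than the underlying drive remote; rclone keeps the OAuth token (a JSON blob with an "expiry" field) on the base remote's entry in rclone.conf. A hedged sketch of checking that expiry stamp yourself — the remote name and token contents below are fabricated samples, not real credentials:

```python
# Check whether a remote's cached OAuth token in rclone.conf is stale.
# rclone.conf is ini-style; the token value is a JSON blob.
import configparser
import json
from datetime import datetime, timezone

SAMPLE_CONF = """
[gdrive]
type = drive
token = {"access_token": "xxx", "expiry": "2021-08-13T10:00:00+00:00"}
"""

def token_expired(conf_text: str, remote: str, now=None) -> bool:
    """True if the remote's token expiry is in the past."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    token = json.loads(cp[remote]["token"])
    expiry = datetime.fromisoformat(token["expiry"])
    now = now or datetime.now(timezone.utc)
    return now >= expiry

print(token_expired(SAMPLE_CONF, "gdrive"))  # -> True (2021 stamp is long past)
```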
  7. Home page has a display error of some kind. History and graph tabs look fine. Image is attached of what it looks like. I found one error in the logs;

     2021-05-19 19:33:35 ERROR [19/May/2021:19:33:35] HTTP
     Traceback (most recent call last):
       File "/app/tautulli/lib/cherrypy/_cprequest.py", line 630, in respond
         self._do_respond(path_info)
       File "/app/tautulli/lib/cherrypy/_cprequest.py", line 689, in _do_respond
         response.body = self.handler()
       File "/app/tautulli/lib/cherrypy/lib/encoding.py", line 221, in __call__
         self.body = self.oldhandler(*args, **kwargs)
       File "/app/tautulli/lib/cherrypy/_cpdispatch.py", line 54, in __call__
         return self.callable(*self.args, **self.kwargs)
       File "/app/tautulli/plexpy/webserve.py", line 399, in home_stats
         stats_count=stats_count)
       File "/app/tautulli/plexpy/datafactory.py", line 314, in get_home_stats
         timestamp = int((datetime.now(tz=plexpy.SYS_TIMEZONE) - timedelta(days=time_range)).timestamp())
     AttributeError: 'datetime.datetime' object has no attribute 'timestamp'

     Any idea how to fix? Or just wait for update... Thanks in advance.
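The AttributeError in the traceback above is characteristic of datetime.timestamp() being called on Python 2 — that method only exists on Python 3.3+, so the failure suggests (an assumption on my part) the container was still running Tautulli under Python 2. The equivalent epoch computation that works without .timestamp() can be sketched like this:

```python
# Manual epoch-seconds conversion for an aware datetime, the usual
# workaround where datetime.timestamp() is unavailable.
from datetime import datetime, timezone

def to_epoch(dt):
    """Epoch seconds for a timezone-aware datetime, without .timestamp()."""
    return (dt - datetime(1970, 1, 1, tzinfo=timezone.utc)).total_seconds()

# Demo: matches the built-in on Python 3 for the timestamp in the log.
dt = datetime(2021, 5, 19, 19, 33, 35, tzinfo=timezone.utc)
print(int(to_epoch(dt)) == int(dt.timestamp()))  # -> True
```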
  8. NEVERMIND FIXED: After discovering there were no config files for deluge-vpn, I disabled and re-enabled Docker. Config files showed up, I copied the OpenVPN files as per usual, and I'm back up and running. Leaving this post for others.

     Had some kind of event at 3am that killed my dockers. Could have been a power outage, I guess; I have a UPS but it has never worked very well with unraid. I'm not even sure that was it, as the server was powered on and it shouldn't have been if there was an outage.

     UPDATE: 3am on Sunday is when my dockers auto-update. Regardless, all of my dockers came back up except binhex deluge-vpn. When I try to start it I get an Execution Error / Server Error window. When I check the server logs, the only thing that shows up is below. I have had my server set up for IPv4 only in network settings for years. Supervisord.log hasn't been touched since 3am.

     UPDATE: I got impatient and deleted the docker and re-installed. On startup I got;

     /usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint binhex-delugevpn (af91521bd4e05570c2288cc4ccc838cbe668d260d8971415f2e7ecf929226404): Bind for 0.0.0.0:58946 failed: port is already allocated.

     The port is not allocated to any other docker though.

     UPDATE: Now there are no config files written at all.
     Dec 21 09:23:33 tmedia kernel: IPv6: ADDRCONF(NETDEV_UP): veth37e079a: link is not ready
     Dec 21 09:23:33 tmedia kernel: docker0: port 8(veth37e079a) entered blocking state
     Dec 21 09:23:33 tmedia kernel: docker0: port 8(veth37e079a) entered forwarding state
     Dec 21 09:23:33 tmedia kernel: docker0: port 8(veth37e079a) entered disabled state
     Dec 21 09:23:33 tmedia kernel: docker0: port 8(veth37e079a) entered disabled state
     Dec 21 09:23:33 tmedia kernel: device veth37e079a left promiscuous mode
     Dec 21 09:23:33 tmedia kernel: docker0: port 8(veth37e079a) entered disabled state
     Dec 21 09:29:18 tmedia kernel: docker0: port 8(vethd741bb1) entered blocking state
     Dec 21 09:29:18 tmedia kernel: docker0: port 8(vethd741bb1) entered disabled state
     Dec 21 09:29:18 tmedia kernel: device vethd741bb1 entered promiscuous mode
     Dec 21 09:29:18 tmedia kernel: IPv6: ADDRCONF(NETDEV_UP): vethd741bb1: link is not ready
     Dec 21 09:29:18 tmedia kernel: docker0: port 8(vethd741bb1) entered blocking state
     Dec 21 09:29:18 tmedia kernel: docker0: port 8(vethd741bb1) entered forwarding state
     Dec 21 09:29:18 tmedia kernel: docker0: port 8(vethd741bb1) entered disabled state
     Dec 21 09:29:18 tmedia kernel: docker0: port 8(vethd741bb1) entered disabled state
     Dec 21 09:29:18 tmedia kernel: device vethd741bb1 left promiscuous mode
     Dec 21 09:29:18 tmedia kernel: docker0: port 8(vethd741bb1) entered disabled state
     Dec 21 09:32:30 tmedia kernel: docker0: port 8(veth63d7cc7) entered blocking state
     Dec 21 09:32:30 tmedia kernel: docker0: port 8(veth63d7cc7) entered disabled state
     Dec 21 09:32:30 tmedia kernel: device veth63d7cc7 entered promiscuous mode
     Dec 21 09:32:30 tmedia kernel: IPv6: ADDRCONF(NETDEV_UP): veth63d7cc7: link is not ready
     Dec 21 09:32:30 tmedia kernel: docker0: port 8(veth63d7cc7) entered blocking state
     Dec 21 09:32:30 tmedia kernel: docker0: port 8(veth63d7cc7) entered forwarding state
     Dec 21 09:32:30 tmedia kernel: docker0: port 8(veth63d7cc7) entered disabled state
     Dec 21 09:32:30 tmedia kernel: docker0: port 8(veth63d7cc7) entered disabled state
     Dec 21 09:32:30 tmedia kernel: device veth63d7cc7 left promiscuous mode
     Dec 21 09:32:30 tmedia kernel: docker0: port 8(veth63d7cc7) entered disabled state
     Dec 21 09:37:49 tmedia kernel: docker0: port 8(vethdc3cb46) entered blocking state
     Dec 21 09:37:49 tmedia kernel: docker0: port 8(vethdc3cb46) entered disabled state
     Dec 21 09:37:49 tmedia kernel: device vethdc3cb46 entered promiscuous mode
     Dec 21 09:37:49 tmedia kernel: IPv6: ADDRCONF(NETDEV_UP): vethdc3cb46: link is not ready
     Dec 21 09:37:49 tmedia kernel: docker0: port 8(vethdc3cb46) entered blocking state
     Dec 21 09:37:49 tmedia kernel: docker0: port 8(vethdc3cb46) entered forwarding state
     Dec 21 09:37:49 tmedia kernel: docker0: port 8(vethdc3cb46) entered disabled state
     Dec 21 09:37:50 tmedia kernel: docker0: port 8(vethdc3cb46) entered disabled state
     Dec 21 09:37:50 tmedia kernel: device vethdc3cb46 left promiscuous mode
     Dec 21 09:37:50 tmedia kernel: docker0: port 8(vethdc3cb46) entered disabled state
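The "Bind for 0.0.0.0:58946 failed: port is already allocated" error in the post above can be reproduced and checked directly: try to bind the port yourself. The helper below is illustrative (it is not unraid or Docker tooling); port 58946 is the one from the post:

```python
# Probe whether a TCP port is already bound by attempting to bind it.
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """True if something is already bound to host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
        except OSError:
            return True
        return False

# Demo: grab an ephemeral port ourselves, then observe it reports busy.
holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", 0))
holder.listen(1)
busy_port = holder.getsockname()[1]
print(port_in_use(busy_port))  # -> True while `holder` keeps it open
holder.close()
```

On the actual server, `netstat -tlnp` (or `ss -tlnp`) would show which process holds the port, if any.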
  9. I really appreciate your responses. Thanks for the assist!
  10. I got impatient just now and started trying things. I removed the movies folder in union with rmdir and it deleted, so it didn't have anything in it or it would have warned me. I recreated the movies folder and tried to run the mount script. Same error. I removed the movies folder from union and ran the mount script. No error!! I looked in union and nothing was there. I added the movies folder back and everything in rclone reappeared. It looks like it's fixed for now, but I don't know how it got broken and why the script wouldn't fix it, as it's been running fine.
  11. /mnt/user/mount_rclone/google_vfs ? No, that's not empty, but unless I'm confused it shouldn't be. It's showing everything that's in my google drive.
  12. Terminal. There is nothing there but the movies directory and it's empty.
  13. I posted this in [Plugin] rclone but am crossposting here because I got most of my setup from this guide. Everything has been working great with rclone since I set it up about a month ago. This weekend, though, I've lost the unionfs mount. I've shut down unraid and rebooted and it doesn't seem to want to come back. Manually running my rclone_unmount script (in background) and then running my rclone_mount script (in background) always yields the same error in the log.

     18.08.2019 08:50:01 INFO: Check rclone vfs already mounted.
     fuse: mountpoint is not empty
     fuse: if you are sure this is safe, use the 'nonempty' mount option
     18.08.2019 08:50:01 CRITICAL: unionfs Remount failed.
     Script Finished Sun, 18 Aug 2019 08:50:01 -0500

     My mount mount_unionfs isn't empty, as I have a movies directory there; the movies directory is empty though. Should I just add the nonempty mount option, or is there a different best practice, or is something else going on? Any help is appreciated.
  14. SOLVED: unionfs has to be empty, including directories, when mounting.

     Everything has been working great with rclone since I set it up about a month ago. This weekend, though, I've lost the unionfs mount. I've shut down unraid and rebooted and it doesn't seem to want to come back. Manually running my rclone_unmount script (in background) and then running my rclone_mount script (in background) always yields the same error in the log.

     18.08.2019 08:50:01 INFO: Check rclone vfs already mounted.
     fuse: mountpoint is not empty
     fuse: if you are sure this is safe, use the 'nonempty' mount option
     18.08.2019 08:50:01 CRITICAL: unionfs Remount failed.
     Script Finished Sun, 18 Aug 2019 08:50:01 -0500

     My mount mount_unionfs isn't empty, as I have a movies directory there; the movies directory is empty though. Should I just add the nonempty mount option, or is there a different best practice, or is something else going on? Any help is appreciated.
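The lesson in the SOLVED post above boils down to: the mountpoint must be completely empty — an empty subdirectory still counts as "not empty" to fuse. A tiny pre-mount check along those lines (the directory layout below is a stand-in for the mount_unionfs path, not the actual script):

```python
# Pre-mount sanity check: a mountpoint with *any* entry, even an empty
# subdirectory, fails -- exactly what tripped the mount script.
import os
import tempfile

def mountpoint_is_empty(path: str) -> bool:
    """True only if `path` exists and contains nothing at all."""
    return os.path.isdir(path) and not os.listdir(path)

mnt = tempfile.mkdtemp()
print(mountpoint_is_empty(mnt))        # -> True
os.mkdir(os.path.join(mnt, "movies"))  # empty subdir, like the post
print(mountpoint_is_empty(mnt))        # -> False
```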
  15. Had an out-of-control Radarr docker tonight: three instances of mono at 164%+ CPU. The first block is the first few lines of top; the second is the ps of the mono processes. Anything I should investigate? Only been running Radarr for about a week after getting sick of Couchpotato. The docker page eventually responded and I stopped Radarr, and everything has gone back to normal, for now.

     PID   USER   PR  NI  VIRT    RES    SHR   S %CPU  %MEM TIME+    COMMAND
     833   nobody 20  0   2512020 849136 208   R 164.9 5.3  12929:11 mono
     26250 nobody 20  0   3116308 1.3g   216   R 164.9 8.4  14097:29 mono
     8833  nobody 20  0   2301536 206876 56    S 164.2 1.3  14027:21 mono
     4305  root   0   -20 0       0      0     D 9.9   0.0  6:47.54  loop2
     700   root   20  0   0       0      0     S 6.3   0.0  28:37.32 kswapd0
     11003 root   20  0   229556  23348  18896 S 2.3   0.1  0:00.07  php
     21251 root   20  0   0       0      0     I 2.3   0.0  0:03.55  kworker/u16:8-btrfs-endio
     2375  root   20  0   0       0      0     I 1.7   0.0  0:03.11  kworker/u16:1-btrfs-endio

     nobody 833   150 5.2 2512020 849136  ?     Sl Apr13 12929:32 /usr/sbin/mono --debug /usr/lib/radarr/Radarr.exe /data=/config /nobrowser /restart
     nobody 880   0.4 3.8 3259644 624176  ?     Ssl Apr14 40:02 mono --debug NzbDrone.exe -nobrowser -data=/config
     nobody 8833  162 1.2 2301536 206876  ?     Rl Apr13 14027:43 /usr/sbin/mono --debug /usr/lib/radarr/Radarr.exe /data=/config /nobrowser /restart
     root   11016 0.0 0.0 5712    2040    pts/0 S+ 20:06 0:00 grep mono
     nobody 18319 0.5 3.2 2769340 522740  ?     Dl Apr14 44:03 /usr/sbin/mono --debug /usr/lib/radarr/Radarr.exe /data=/config /nobrowser /restart
     nobody 26250 160 8.4 3116308 1359656 ?     Rl Apr13 14097:50 /usr/bin/mono --debug /usr/lib/radarr/Radarr.exe -nobrowser -data=/config
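Spotting runaways like the ones above is easy to script. A small sketch that parses `ps aux`-style lines and flags anything over a CPU threshold, using rows from the post as sample data (the field positions assume standard `ps aux` columns; the function name and threshold are mine):

```python
# Flag processes whose %CPU (third ps-aux column) exceeds a threshold.
SAMPLE = """\
nobody 833 150 5.2 2512020 849136 ? Sl Apr13 12929:32 /usr/sbin/mono --debug /usr/lib/radarr/Radarr.exe
nobody 880 0.4 3.8 3259644 624176 ? Ssl Apr14 40:02 mono --debug NzbDrone.exe
nobody 8833 162 1.2 2301536 206876 ? Rl Apr13 14027:43 /usr/sbin/mono --debug /usr/lib/radarr/Radarr.exe
nobody 26250 160 8.4 3116308 1359656 ? Rl Apr13 14097:50 /usr/bin/mono --debug /usr/lib/radarr/Radarr.exe
"""

def hot_pids(ps_text: str, cpu_threshold: float = 100.0):
    """Return (pid, %cpu) pairs for lines whose %CPU exceeds the threshold."""
    hits = []
    for line in ps_text.splitlines():
        fields = line.split()
        pid, cpu = int(fields[1]), float(fields[2])
        if cpu > cpu_threshold:
            hits.append((pid, cpu))
    return hits

print(hot_pids(SAMPLE))  # -> [(833, 150.0), (8833, 162.0), (26250, 160.0)]
```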