Spatial Disorder


Posts posted by Spatial Disorder

  1. 1 hour ago, RichJacot said:

    Traceback (most recent call last):
      File "/usr/lib/sabnzbd/venv/lib/python3.10/site-packages/cheroot/server.py", line 1807, in serve
        self._connections.run(self.expiration_interval)
      File "/usr/lib/sabnzbd/venv/lib/python3.10/site-packages/cheroot/connections.py", line 198, in run
        self._run(expiration_interval)
      File "/usr/lib/sabnzbd/venv/lib/python3.10/site-packages/cheroot/connections.py", line 241, in _run
        new_conn = self._from_server_socket(self.server.socket)
      File "/usr/lib/sabnzbd/venv/lib/python3.10/site-packages/cheroot/connections.py", line 295, in _from_server_socket
        s, ssl_env = self.server.ssl_adapter.wrap(s)
      File "/usr/lib/sabnzbd/venv/lib/python3.10/site-packages/cheroot/ssl/builtin.py", line 270, in wrap
        s = self.context.wrap_socket(
      File "/usr/lib/python3.10/ssl.py", line 513, in wrap_socket
        return self.sslsocket_class._create(
      File "/usr/lib/python3.10/ssl.py", line 1071, in _create
        self.do_handshake()
      File "/usr/lib/python3.10/ssl.py", line 1342, in do_handshake
        self._sslobj.do_handshake()
    ssl.SSLZeroReturnError: TLS/SSL connection has been closed (EOF) (_ssl.c:997)

     

    I manually added a download through Radarr and it 'appears' to function as expected. Of course, we'd like to not see errors all the time though. ;-)

    Also seeing this error...rolled back to `3.7.0-1-01` and all is well...

  2. 1 hour ago, Jorgen said:

    Looking very good here. Successfully acquired a port within 21 seconds of starting the new container. As far as I can see it only took one try, no re-tries. But then again I don't have debug logs on, so I'm not sure if the re-tries would show?

     

    Same here...pulled down the new test update and acquired port right away on the first attempt. Everything is looking good so far.

  3. 21 minutes ago, UnBeastRaid said:

    I have successfully connected to Montreal via PIA. Thanks @Binhex!!

     

    One question: How do I know what my VPN IP is? I remember before that I saw my IP in the lower right hand corner of the Deluge GUI.

    It now says "N/A" after success with VPN connection.

     

    - Thanks

    My external ip is showing accurately in the bottom right. I didn't pay attention when I first connected...so I'm not sure if it refreshes at some interval or not?

  4. 5 hours ago, DZMM said:

    It's the logical next step. I've ditched my parity drive (I back up to gdrive using Duplicati) and sold all but 2 of my HDDs, which store seeds, pending uploads, and my work/personal documents. I don't really use the mass storage functionality anymore other than pooling the 2 HDDs - it's kinda impossible, and would be mega expensive, to store 0.5PB+ of content.....

     

    My unRAID server's main purpose is to power VMs (3x W10 VMs for me and the kids + a pfSense VM) and Dockers (Plex server with remote storage, Home Assistant, UniFi, Minecraft server, Nextcloud, Radarr, etc.).

    Wow...0.5PB...that's pretty impressive. Any concerns with monthly bandwidth utilization? No issues from your ISP?

     

    I've also been using duplicati for the last few years, been very happy with it overall. Do you do anything different with your offsite mount to ensure you could recover in the event of...say an accidental deletion?

     

     

  5. 4 hours ago, DZMM said:

    If you want some partitioning, you could do /mergerfs --> /mnt/user/gdrive_mergerfs and then within your dockers use the following paths:

     

    
    /mergerfs/downloads for /mnt/user/gdrive_mergerfs/downloads/  
    /mergerfs/media/tv for /mnt/user/gdrive_mergerfs/tv/  

     

     

    The trick is your torrent, radarr, sonarr etc dockers have to be moving files around the mergerfs mount i.e. /mergerfs.

     

    If you map:

    
    /mergerfs --> /mnt/user/gdrive_mergerfs
    /downloads --> /mnt/user/gdrive_mergerfs/downloads
    /downloads_local (adding for another example) --> /mnt/user/gdrive_local/downloads

     

    when you ask the docker to hardlink a file from /downloads or /downloads_local to /mergerfs it won't work.  It has to be from /mergerfs/downloads to /mergerfs/media/tv - within /mergerfs.

     

    To be clear, when I say I do /user --> /mnt/user it's because it just makes my life easier when I'm setting up all dockers to talk to each other (I'm lazy) - within my media dockers I still only use paths within /user/mount_mergerfs e.g.  /user/mount_mergerfs/downloads and /user/mount_mergerfs/tv_shows

    Thanks, that makes sense. The more I think about it the more I'm leaning toward going all in on this in order to simplify everything. Right now I have a mix of data local and in gdrive. I'm with you on being lazy...I work in IT and the older I get the less I want to mess with certain aspects of it...just want reliability. I really only have one share that would even be a concern...and now that I think about it...it should probably live in an encrypted vault...
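    The hardlink constraint DZMM describes above is easy to verify: a hardlink only works within a single filesystem/mount, which is why both source and destination must sit under the same /mergerfs mapping. A throwaway demo (temp directory, not real shares):

```shell
# Throwaway demo: hardlinks succeed within one filesystem, so paths that
# must hardlink to each other need to live under the same mount/mapping.
demo=$(mktemp -d)
mkdir -p "$demo/downloads" "$demo/media/tv"
echo "episode" > "$demo/downloads/show.mkv"
# Same filesystem: the link succeeds and both names share one inode.
ln "$demo/downloads/show.mkv" "$demo/media/tv/show.mkv"
links=$(stat -c %h "$demo/downloads/show.mkv")
echo "link count: $links"    # → link count: 2
rm -rf "$demo"
```

    Across two separate mounts (e.g. from /downloads_local into /mergerfs) the same ln call fails with "Invalid cross-device link" (EXDEV), which is why the move has to happen from /mergerfs/downloads to /mergerfs/media/tv.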

  6. 18 hours ago, DZMM said:

    If you want full hardlink support, map all docker paths to /user --> /mnt/user, then within the docker set all locations to a sub-path of /user/mount_mergerfs. Then behind the scenes unRAID and rclone will behave as normal and manage where the files really reside.

    First, @DZMM, thanks for sharing this, and thanks to everyone who has contributed to making it better. I've been using it for a few months now with mostly no issues (I had that odd orphaned-image issue a while back that a few of us hit with mergerfs). That was really the only hiccup. I'm seriously considering moving everything offsite...

     

    As for hardlinks, this is timely, as I've finally decided to get around to making hardlinks and getting seeding to work properly. When I originally set things up I had:

    /mnt/user/media (with subdirectories for tv/movies/music/audiobooks)

    /mnt/user/downloads (with subdirectories for complete/incomplete)

     

    Your script came along and I then added:

    /mnt/user/gdrive_local

    /mnt/user/gdrive_mergerfs

    /mnt/user/gdrive_rclone

     

    I know just mapping all containers to /mnt/user would solve this...but I'm a little apprehensive about giving all of these applications read/write access to the entire array. I don't have any of this data going to cache...so is there anything stopping me (or a good reason not to) from stuffing everything into /mnt/user/media and then mapping everything to that?
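    One middle ground for that question: map only the media share rather than all of /mnt/user. A hypothetical sketch (container name and image are placeholders, not from this thread):

```shell
# Hypothetical narrower mapping: expose only /mnt/user/media to a container
# instead of all of /mnt/user. Name and image are placeholders.
start_media_container() {
    docker run -d --name sonarr \
        -v /mnt/user/media:/media \
        linuxserver/sonarr
}
# Inside the container, both the completed-downloads folder and the library
# would then be sub-paths of /media (e.g. /media/downloads and /media/tv),
# so hardlinks never cross a mount boundary.
```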

     

  7. @DZMM and @teh0wner Thanks for the help, I'm back up and running and have moved to the latest working scripts with no issues the last 24 hours. I'm fairly certain the new scripts were failing due to the latest mergerfs not getting pulled. Just for good measure I:

    1. Deleted all shares (local, rclone mount, mergerfs mount)
    2. Ran docker rmi [imageID] to get rid of the old mergerfs image
    3. After some reading up on rclone vs rclone-beta, I reverted back to rclone (I don't think this was the issue, but for this purpose I'd rather be on a stable release, and I see nothing I need in the beta). I'm sure it's fine either way.
    4. Pulled latest github scripts and modified variables for my setup
    5. Clean reboot for good measure
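    Consolidated as a rough script sketch (the share paths are the ones listed earlier and the image name comes from the mount script; the rm is destructive, so run pieces by hand rather than blindly):

```shell
# Rough consolidation of steps 1-5 above; share paths are the ones from
# this thread and the image name comes from the mount script.
rebuild_mounts() {
    base="${1:-/mnt/user}"
    # 1. clear the local, rclone mount and mergerfs mount shares
    rm -rf "$base/gdrive_local" "$base/gdrive_rclone" "$base/gdrive_mergerfs"
    # 2. remove the stale mergerfs build image so the next run pulls it fresh
    old_image=$(docker images -q trapexit/mergerfs-static-build)
    [ -n "$old_image" ] && docker rmi "$old_image"
    # 3-5. switching rclone plugins, re-pulling the scripts and rebooting
    # remain manual steps
    echo "mounts cleared; re-pull scripts and reboot"
}
```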

    Thanks for all the work putting this together 🤘

     

  8. 1 hour ago, DZMM said:

    @teh0wner can you post your chosen mount options as I think you've got something wrong in there

     

    @Spatial Disorder have you tried installing mergerfs again since the change?

    @DZMM I blew away all shares and appdata, rebooted, then manually ran

    mkdir -p /mnt/user/appdata/other/rclone/mergerfs
    docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
    mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin

    I then ran the latest gdrive_rclone_mount script and it runs successfully (shown below). But as soon as I even try to list the contents of mount_mergerfs, the terminal freezes, a CPU core pegs, and it never responds. I'm not sure what else to try...

    Script location: /tmp/user.scripts/tmpScripts/gdrive_mount/script
    Note that closing this window will abort the execution of this script
    01.03.2020 12:50:26 INFO: *** Starting mount of remote gdrive_media_vfs
    01.03.2020 12:50:26 INFO: Checking if this script is already running.
    01.03.2020 12:50:26 INFO: Script not running - proceeding.
    01.03.2020 12:50:26 INFO: Mount not running. Will now mount gdrive_media_vfs remote.
    01.03.2020 12:50:26 INFO: Recreating mountcheck file for gdrive_media_vfs remote.
    2020/03/01 12:50:26 DEBUG : rclone: Version "v1.51.0-076-g38a4d50e-beta" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone-beta/.rclone.conf" "copy" "mountcheck" "gdrive_media_vfs:" "-vv" "--no-traverse"]
    2020/03/01 12:50:26 DEBUG : Using config file from "/boot/config/plugins/rclone-beta/.rclone.conf"
    2020/03/01 12:50:27 DEBUG : mountcheck: Modification times differ by -12h40m10.128729289s: 2020-03-01 12:50:26.567729289 -0500 EST, 2020-03-01 05:10:16.439 +0000 UTC
    2020/03/01 12:50:29 INFO : mountcheck: Copied (replaced existing)
    2020/03/01 12:50:29 INFO :
    Transferred: 32 / 32 Bytes, 100%, 20 Bytes/s, ETA 0s
    Transferred: 1 / 1, 100%
    Elapsed time: 1.5s
    
    2020/03/01 12:50:29 DEBUG : 7 go routines active
    01.03.2020 12:50:29 INFO: *** Creating mount for remote gdrive_media_vfs
    01.03.2020 12:50:29 INFO: sleeping for 5 seconds
    01.03.2020 12:50:34 INFO: continuing...
    01.03.2020 12:50:34 INFO: Successful mount of gdrive_media_vfs mount.
    01.03.2020 12:50:34 INFO: Mergerfs already installed, proceeding to create mergerfs mount
    01.03.2020 12:50:34 INFO: Creating gdrive_media_vfs mergerfs mount.
    01.03.2020 12:50:34 INFO: Checking if gdrive_media_vfs mergerfs mount created.
    01.03.2020 12:50:34 INFO: Check successful, gdrive_media_vfs mergerfs mount created.
    01.03.2020 12:50:34 INFO: Starting dockers.
    "docker start" requires at least 1 argument.
    See 'docker start --help'.
    
    Usage: docker start [OPTIONS] CONTAINER [CONTAINER...]
    
    Start one or more stopped containers
    01.03.2020 12:50:34 INFO: Script complete
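    As an aside, the "docker start" usage error at the end of that log just means the script's list of containers to start was empty. A guard along these lines (variable name and message are hypothetical) would skip the call cleanly instead of erroring:

```shell
# Hypothetical guard: only call `docker start` when the container list
# is non-empty, otherwise log and move on.
docker_start_guard() {
    containers="$1"
    if [ -n "$containers" ]; then
        docker start $containers
    else
        echo "INFO: no dockers listed, skipping docker start"
    fi
}
docker_start_guard ""    # → INFO: no dockers listed, skipping docker start
```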

     

  9. 3 hours ago, teh0wner said:

    I've had an issue with the mount script today... complaining about the cache running out of space.

    2020/03/01 09:17:44 ERROR : downloads/ABC.mkv: Failed to copy: multpart copy: write failed: write /root/.cache/rclone/vfs/google_drive_encrypted_vfs/ABC.mkv: no space left on device

    Is there a way I can change the cache location? AFAIK /root/.cache is on the sdcard, which barely has any space. Ideally this would be my cache disk.

    Fix Common Problems has flagged it up too - diagnostics attached.

     

    I got that working by using --cache-dir in the rclone_mount script during mounting; however, it (weirdly) seems mergerfs doesn't like that. I can see the rclone mount just fine, but trying to access the mergerfs mount just hangs. Any ideas why this might be the case?


    Thanks

    r2-d2-diagnostics-20200301-1115.zip

    I'm also seeing an issue with mergerfs hanging, @teh0wner. I was doing some maintenance last night, decided to update to the latest version of this script, and could not access the mergerfs mount. It would peg a single core on the CPU and just hang. I reverted back to the older script, which had previously been working for a month+, and saw the same issue. I blew away all directories (local, mount_rclone, mount_mergerfs, appdata/other) and did a clean reboot, with the same issue. Looking at the last few posts, I'm wondering if something changed with trapexit/mergerfs-static-build.
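    For the --cache-dir piece quoted above, the mount invocation would look something like this, with the VFS cache pointed at the cache pool instead of /root/.cache (the cache path is an assumed example; the remaining flags depend on your script):

```shell
# Sketch: the same mount command with --cache-dir pointed at the cache disk
# instead of the default /root/.cache. The cache path is an assumed example.
CACHE_DIR="/mnt/cache/rclone_cache"
MOUNT_CMD="rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs \
--allow-other --vfs-cache-mode writes --cache-dir=$CACHE_DIR --daemon"
echo "$MOUNT_CMD"
```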

  10. 1 hour ago, Abzstrak said:

    While Plex is the most prevalent, people are seeing this with most apps that use sqlite. I'm still on the trial and have little desire to spend $130 on a piece of software that doesn't work for my purposes. I figured unRAID would be easy and allow some extra power savings by spinning down drives, but I've been a Linux system admin for 15 years, and I'm strongly thinking of dumping it for a normal distro or going back to OMV on the box.

     

    It's also highly irritating that this seems to be fairly wide spread, but not being addressed by limetech.

    I can appreciate that...especially when trying to decide which OS to go with. I will say, I jumped to Unraid about 2.5 years ago and it has been fantastic, and one of the easiest systems to administer. The few times I've had issues, this community has been extremely quick to help resolve them. Hell, I've gotten better support here than I have for enterprise products I pay six figures for (I'm an IT Manager).

     

    I think this particular issue seems extremely hard to replicate. Plex corrupted on me, but I'm also running Lidarr, Sonarr, Radarr, Bazarr, Tautulli, Syncthing, and Pi-hole, which all seem to use sqlite to some extent...and I've had no issues with those. Now that I've taken a closer look, Pi-hole and Syncthing are also using /mnt/user instead of /mnt/cache, and they're fine so far (moving them now though). So why did Plex corrupt and not the others?

  11. Just came across this post and wanted to add another to the list. Plex has been running solid since my last server rebuild nearly 2.5 years ago. I upgraded to 6.7 a few days after release and a week or two later Plex became corrupt. If I recall it was shortly after an update to the Plex container...so I never suspected it as a potential Unraid issue. Since Plex had been running for so long, and I was kind of wanting to change some of my libraries around, I decided to just do a rebuild instead of restoring from backups.

     

    I also run Sonarr/Radarr/Lidarr with no issues...but they are all using /mnt/cache. I would have sworn the same was true for Plex, but I just pulled down a backup from before my rebuild, and sure enough, I used /mnt/user. It was the first container I set up, and I probably didn't know any better at the time.

     

    I believe someone also mentioned something about CA Backup / Restore Appdata...I also run weekly backups of all containers...but I don't recall whether the corruption happened after a backup. Even if it did, this seems more like a possible correlation than the underlying cause.

     

    I know this is all anecdotal, but @CHBMB may be on to something with not only switching to /mnt/cache or disk, but also creating a clean install with clean database. So far, I've had no issues with Plex since my clean install almost three weeks ago.

  12. 4 hours ago, DZMM said:

    Can you post the link to the post discussing this please as I couldn't find it.  I'm giving it a go on my 3xW10 VMs anyway as I've been seeing a lot of high CPU use and I've also been getting a lot of crashes and freezes, which I hope are related.

     

     

    @DZMM I think it was this one: https://forum.proxmox.com/threads/high-cpu-load-for-windows-10-guests-when-idle.44531/

    I'm testing right now to see if this resolves it for me...

    I did some experimenting last night after upgrading unRAID to 6.5.2 and trying a clean install of Windows 10. I still see about 15%-20% CPU utilization...even though Task Manager within the VM shows it nearly idle.

    What I did:

    • I downloaded a clean Windows 10 ISO, which included the 1803 update.
    • Basically used the default VM template settings, with the exception of using SeaBIOS and virtio 0.1.126

    Seeing the same results... I checked and saw virtio stable is now up to 0.1.146, so I blew away the entire VM and reinstalled with virtio 0.1.146 (I have no idea if this could even cause the issue...) and I'm still seeing the same 15%-20% CPU at idle.

     

    Doing some google-fu, I found a couple of folks posting similar issues with KVM on Ubuntu Server...no resolution that I could find; just wanted to share.

  14. I'm also seeing this issue after the Windows 10 April (1803) update. Current VM has been solid for probably close to a year, and I noticed the issue immediately after 1803. Task Manager within VM shows essentially nothing going on...yet unRAID CPU shows around 20%. 

    I had the same issues as @rbroberts when updating the container a few months back. Everything would be working fine, then break after updating the container. After the first time, I blew everything away and did a clean setup; it worked great until another update, when it happened again, so I bailed on using it. I was mostly just screwing around with it and wasn't really interested in troubleshooting. I didn't keep any logs, so this is probably useless, other than confirming I've also seen this same issue.

    Thank you johnnie.black! I would never have figured this one out on my own...and I've learned more about btrfs than I ever wanted to :S

     

    15 hours ago, johnnie.black said:

    That's why I said you need to start small. I thought that since it relocated one chunk with 5, you could go straight to a much higher number. Try -dusage=10, then 20, and you should be able to go to 80; if it still fails, try smaller increasing values until it works, or you'll need to clear more space. When it's done, schedule a weekly balance so it doesn't happen again.

    I had misunderstood what you meant by start small. So, even though it failed at 80, it did balance a significant amount. I went back and was able to quickly increment up to about 70...then worked my way up to a 100% balance with no errors. Now I'm showing:

    root@Server:~# btrfs fi show /mnt/cache
    Label: none  uuid: 8df4175c-ffe2-44d7-91e2-fbb331319bed
            Total devices 1 FS bytes used 121.61GiB
            devid    1 size 232.89GiB used 125.02GiB path /dev/sdc1
    

    Thanks again for all the help!
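    For anyone landing here later, the start-small routine johnnie.black describes can be sketched as a loop (the mount point and usage steps are illustrative; it needs a real btrfs filesystem to do anything):

```shell
# Step the -dusage filter up gradually so each balance pass only needs a
# little free space; stop on the first failure so more space can be cleared.
incremental_balance() {
    mount_point="$1"
    for usage in 5 10 20 40 60 80 100; do
        echo "balancing chunks under ${usage}% usage on $mount_point"
        btrfs balance start "-dusage=$usage" "$mount_point" || return 1
    done
}
# e.g. incremental_balance /mnt/cache, then schedule a weekly balance so
# the cache doesn't fill up with half-empty chunks again.
```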

  17. Well....

    root@Server:/mnt/user/james# btrfs balance start -dusage=80 /mnt/cache
    ERROR: error during balancing '/mnt/cache': No space left on device
    There may be more info in syslog - try dmesg | tail
    
    root@Server:/mnt/user/james# dmesg | tail
    [27506.319628] BTRFS info (device sdc1): relocating block group 58003030016 flags 1
    [27511.777268] BTRFS info (device sdc1): found 25126 extents
    [27606.230821] BTRFS info (device sdc1): found 25126 extents
    [27606.418496] BTRFS info (device sdc1): relocating block group 56929288192 flags 1
    [27627.136389] BTRFS info (device sdc1): found 30137 extents
    [27682.014305] BTRFS info (device sdc1): found 30137 extents
    [27682.216675] BTRFS info (device sdc1): relocating block group 55855546368 flags 1
    [27707.130530] BTRFS info (device sdc1): found 30129 extents
    [27773.906438] BTRFS info (device sdc1): found 30127 extents
    [27774.372412] BTRFS info (device sdc1): 3 enospc errors during balance
    

    Not sure what to do next...do I need to clear more space? That would mean moving the docker data in appdata, or the domains (Win10 / Xubuntu) vdisks, off the cache.

  18. I'm confused....

     

    Before I did anything else:

    root@Server:~# btrfs fi show /mnt/cache
    Label: none  uuid: 8df4175c-ffe2-44d7-91e2-fbb331319bed
            Total devices 1 FS bytes used 131.82GiB
            devid    1 size 232.89GiB used 232.89GiB path /dev/sdc1

     

    After looking at /mnt/cache, I realized I'd forgotten I had downloads sitting on the cache drive...I deleted those (~11GB).

    I then ran the below command as suggested in the linked post

    root@Server:/mnt/cache/system# btrfs balance start -dusage=5 /mnt/cache
    Done, had to relocate 1 out of 236 chunks
    

     

    I then get:

    root@Server:/mnt/cache/system# btrfs fi show /mnt/cache
    Label: none  uuid: 8df4175c-ffe2-44d7-91e2-fbb331319bed
            Total devices 1 FS bytes used 120.47GiB
            devid    1 size 232.89GiB used 232.88GiB path /dev/sdc1

     

    I only have 4 shares on /mnt/cache:

    root@Server:/mnt/cache# du -sh /mnt/cache/appdata/
    38G     /mnt/cache/appdata/
    root@Server:/mnt/cache# du -sh /mnt/cache/domains/
    45G     /mnt/cache/domains/
    root@Server:/mnt/cache# du -sh /mnt/cache/downloads/
    205M    /mnt/cache/downloads/
    root@Server:/mnt/cache# du -sh /mnt/cache/system/
    26G     /mnt/cache/system/

    Which should add up to ~110GB used...

    Started getting a cache-drive-full error, and dockers/VMs stopping/pausing...however, the cache disk shows plenty of free space. The server has been extremely stable in its current configuration since about February. Though, I did add the musicbrainz/headphones dockers maybe 4-6 weeks ago.

    I did a reboot this morning (sorry, I'm from the Windows Server world...when things get squirrely it's time for a reboot) and this changed nothing.

    I also expanded the docker vdisk from 20GB to 25GB which also didn't help.

     

    Cache shouldn't ever get full before mover runs...I don't download/move much data around on average.

     

    Diagnostics are attached.

     

    server-diagnostics-20171118-1010.zip

    cache_storage.JPG

  20. I don't know that I need it...but what's the root password? I tried root with the password 5iveL!fe and it doesn't seem to work. Otherwise, I was able to set up an account and it's been working great.

     

    I'm an idiot...it's the first thing I set up after hitting the web GUI :S I realized this after doing a clean install. Unfortunately, after testing it out for a week or two, I got a 502 error out of nowhere...and I never could get it to recover. It wasn't worth the effort, so I just re-installed.