Everything posted by casperse

  1. Thanks! I found this line in the rtorrent.rc file:

        # Port range to use for listening.
        #
        #network.port_range.set = 49160-49160

     So is it enough to just uncomment this line and set it to e.g. network.port_range.set = 58550-58550?
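For reference, the edit can also be scripted. This is only a sketch against a throwaway copy of the file (the real rtorrent.rc lives in the container's appdata folder), using the 58550 range from the post above:

```shell
# Work on a throwaway copy; the real rtorrent.rc is in the container's appdata.
RC=/tmp/rtorrent.rc
printf '#network.port_range.set = 49160-49160\n' > "$RC"

# Uncomment the line and set the new range in one step:
sed -i 's|^#network\.port_range\.set = .*|network.port_range.set = 58550-58550|' "$RC"
grep '^network.port_range.set' "$RC"
```

The container needs a restart for the change to take effect, and if it runs through a VPN the same port generally has to be forwarded by the VPN provider as well for incoming peers.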
  2. Thanks @ReDew, that must have been it! During the re-install I kept it default to /Data, no "-" on the incomplete folder like before. I have it downloading, but it's not uploading any data (0), so I am back to opening ports? How do I check whether I am connectable to seed? Running through PIA WireGuard, what ports should I create NAT rules for? I am running in "Bridge" mode and not on Proxynet.
  3. Thanks for helping me! I removed all docker templates and the docker image and did yet another re-install, and I also removed the "perms.txt" just to make sure perms were not the problem, and I got the UI to start! 🙂 But my speeds are really slow? So I wonder if I am missing any port forwarding? So far I thought I only needed the WireGuard port 51820/UDP? Should it be the whole range 51820-65535 UDP? Other ports shouldn't be needed when I only want it to use the WireGuard VPN, right? I also have Strict Port Forwarding set to yes... So close now... LOL
  4. I really need some help troubleshooting this! 1) I got delugevpn working, but I would like to use rTorrent 🙂 2) I had this working a year ago, but after doing a total re-install I can't get rTorrent to run? (No VPN or Strict port forwarding.) My configuration is as shown below. Error log: 1641568550 C Caught exception: 'Error in option file: ~/.rtorrent.rc:61: Bad return code.'. The access rights are fine (I even did a chmod 777, just to make sure).
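"Bad return code" at ~/.rtorrent.rc:61 means rtorrent rejected whatever command sits on that line (often syntax that changed between rtorrent versions), so printing that exact line is the quickest first step. A sketch against a simulated file, since the real path is inside the container:

```shell
# Simulate a 70-line config; the real file is ~/.rtorrent.rc inside the container.
RC=/tmp/rtorrent_sim.rc
seq -f 'option_%g = value' 70 > "$RC"

# Print only line 61, the one the parser rejected.
sed -n '61p' "$RC"
```

Run the same `sed -n '61p'` against the real file from a console inside the container, then compare that option against the current rtorrent documentation.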
  5. I keep getting this error: Googling it said the file should be chmod 600. I did this but I still get the error? I am using the same conf file for WireGuard as used in binhex, but I can see that he defines the user and password in the docker, and you write only for openvpn? So is the conf file different? Sorry, I haven't found much information about this 🙂
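The chmod itself is trivial; the sketch below just shows it against a hypothetical path (the real wg0.conf sits under the container's appdata), together with a way to verify that the mode actually took effect:

```shell
# Hypothetical path; the real wg0.conf is under the container's appdata folder.
CONF=/tmp/wg0.conf
touch "$CONF"

chmod 600 "$CONF"     # owner read/write only, which WireGuard tooling expects
stat -c '%a' "$CONF"  # print the octal mode to confirm
```

Note that if the container regenerates the file on every start, the mode may be reset as well, so check it again after a restart.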
  6. Perfect! Sorry for the late answer, I had too many Christmas duties LOL. I didn't find this option before? I have now done this for both USB devices needed in Home Assistant! Guess this will solve all my problems, thanks a lot! This is a great plugin for Unraid.
  7. Thanks! It seems to be working 🙂 After the first reboot it only mapped one of the USB drives, but after the second it looks to be fine, both are mapped under the VMs and the USB manager. Sigma Designs is listed as 1-11 0658_0200, but all the other USB devices use the vendor name? Could this have anything to do with this one not being attached after reboot?
  8. I have removed the USB X from the VM UI, but I can still see some USB settings in the XML view? Also here if I try to detach and re-attach the USB. Another thing is that this plugin doesn't list the USB name of the Z-Wave stick, but I don't think that's related to the error? The plugin shows the Sigma as: Should I bind the driver before using the tool to attach the USB to the VM?
  9. This is a great application! Having Home Assistant on a VM with USB devices for Zigbee and Z-Wave is very troublesome! I have never gotten the VM setting to work? After every reboot I needed another plugin to detach and re-attach the USB devices. I now have your plugin installed and it seems to work, but I have warnings here: Should I just remove all USB settings from the VM config and let the plugin handle things? What does the warning mean? Again, thanks for creating this app, it's very cool!
  10. Almost ready to give up and move on to another solution, but I must admit I have the most trust in binhex and the security built into these dockers... Can anyone give any input on why this isn't working? Cheers
  11. I have tried changing the NAME SERVERS from,,,,,,, to,,,,, Still can't start it... I tried binhex-qbittorrentvpn and that also works, just not rTorrent? Update: I have them all on my "Proxynet" and only binhex-privoxyvpn on "Bridge" (I have 5 PIA licenses, but I can't run Deluge & privoxyvpn at the same time?). The ports for rTorrent match the ones on the docker. Docker: I can also see that the wg0.conf is generated! Anyone have any input on what I am doing wrong? 🙂 Logfile: I did try to add the above port 49184, but that didn't work either.
  12. Just moved back to PIA (Black Friday) and I am following the WireGuard guide, and I have everything working for Deluge. But for some reason rTorrent just doesn't want to start up? I have checked that the ports are open (same as before) and I have enabled debug. Also tried to re-install the docker and removed the appdata. Error log: Tried to change so many things (I was sure I could get this working if I just kept trying LOL).
  13. Would this docker work for this problem?
  14. Thanks for the info and your work creating this docker! I also think Trilium looks more modern, but Joplin does have a native app for both Android and iOS. I can't find any apps for mobile devices, only desktop apps?
  15. I am now facing this very old problem again after updating my Nextcloud to version 22.2.3 and moving from SWAG to Nginx Proxy Manager. I deleted the default config \appdata\nextcloud\nginx\site-confs\default and got the new version, and that removed the dreaded error:

     Your web server is not properly set up to resolve "/.well-known/webfinger". Further information can be found in the documentation.

     But I got the other old error instead:

     The "Strict-Transport-Security" HTTP header is not set to at least "15552000" seconds. For enhanced security, it is recommended to enable HSTS as described in the security tips.

     So I went back into the default configuration and uncommented the line:

        add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;

     which removed the error but got me the old error back. So I am in a time loop and just can't get rid of these two errors. Any input on how to solve this? My Nginx Proxy Manager setup is really simple. And the default file:

        upstream php-handler {
            server;
        }

        server {
            listen 80;
            listen [::]:80;
            server_name _;
            return 301 https://$host$request_uri;
        }

        server {
            listen 443 ssl http2;
            listen [::]:443 ssl http2;
            server_name _;

            ssl_certificate /config/keys/cert.crt;
            ssl_certificate_key /config/keys/cert.key;

            # Add headers to serve security related headers
            # Before enabling Strict-Transport-Security headers please read into this
            # topic first.
            add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;
            #
            # WARNING: Only add the preload option once you read about
            # the consequences in This option
            # will add the domain to a hardcoded list that is shipped
            # in all major browsers and getting removed from this list
            # could take several months.

            # set max upload size
            client_max_body_size 512M;
            fastcgi_buffers 64 4K;

            # Enable gzip but do not remove ETag headers
            gzip on;
            gzip_vary on;
            gzip_comp_level 4;
            gzip_min_length 256;
            gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
            gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/ application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

            # HTTP response headers borrowed from Nextcloud `.htaccess`
            add_header Referrer-Policy "no-referrer" always;
            add_header X-Content-Type-Options "nosniff" always;
            add_header X-Download-Options "noopen" always;
            add_header X-Frame-Options "SAMEORIGIN" always;
            add_header X-Permitted-Cross-Domain-Policies "none" always;
            add_header X-Robots-Tag "none" always;
            add_header X-XSS-Protection "1; mode=block" always;

            # Remove X-Powered-By, which is an information leak
            fastcgi_hide_header X-Powered-By;

            root /config/www/nextcloud/;

            # display real ip in nginx logs when connected through reverse proxy via docker network
            set_real_ip_from;
            real_ip_header X-Forwarded-For;

            # Specify how to handle directories -- specifying `/index.php$request_uri`
            # here as the fallback means that Nginx always exhibits the desired behaviour
            # when a client requests a path that corresponds to a directory that exists
            # on the server. In particular, if that directory contains an index.php file,
            # that file is correctly served; if it doesn't, then the request is passed to
            # the front-end controller. This consistent behaviour means that we don't need
            # to specify custom rules for certain paths (e.g. images and other assets,
            # `/updater`, `/ocm-provider`, `/ocs-provider`), and thus
            # `try_files $uri $uri/ /index.php$request_uri`
            # always provides the desired behaviour.
            index index.php index.html /index.php$request_uri;

            # Rule borrowed from `.htaccess` to handle Microsoft DAV clients
            location = / {
                if ( $http_user_agent ~ ^DavClnt ) {
                    return 302 /remote.php/webdav/$is_args$args;
                }
            }

            location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
            }

            # Make a regex exception for `/.well-known` so that clients can still
            # access it despite the existence of the regex rule
            # `location ~ /(\.|autotest|...)` which would otherwise handle requests
            # for `/.well-known`.
            location ^~ /.well-known {
                # The following 6 rules are borrowed from `.htaccess`
                location = /.well-known/carddav { return 301 /remote.php/dav/; }
                location = /.well-known/caldav  { return 301 /remote.php/dav/; }

                # Anything else is dynamically handled by Nextcloud
                location ^~ /.well-known { return 301 /index.php$uri; }

                try_files $uri $uri/ =404;
            }

            # Rules borrowed from `.htaccess` to hide certain paths from clients
            location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/) { return 404; }
            location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) { return 404; }

            # Ensure this block, which passes PHP files to the PHP process, is above the blocks
            # which handle static assets (as seen below). If this block is not declared first,
            # then Nginx will encounter an infinite rewriting loop when it prepends `/index.php`
            # to the URI, resulting in a HTTP 500 error response.
            location ~ \.php(?:$|/) {
                fastcgi_split_path_info ^(.+?\.php)(/.*)$;
                set $path_info $fastcgi_path_info;

                try_files $fastcgi_script_name =404;

                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $path_info;
                fastcgi_param HTTPS on;

                fastcgi_param modHeadersAvailable true;     # Avoid sending the security headers twice
                fastcgi_param front_controller_active true; # Enable pretty urls
                fastcgi_pass php-handler;

                fastcgi_intercept_errors on;
                fastcgi_request_buffering off;
            }

            location ~ \.(?:css|js|svg|gif)$ {
                try_files $uri /index.php$request_uri;
                expires 6M;     # Cache-Control policy borrowed from `.htaccess`
                access_log off; # Optional: Don't log access to assets
            }

            location ~ \.woff2?$ {
                try_files $uri /index.php$request_uri;
                expires 7d;     # Cache-Control policy borrowed from `.htaccess`
                access_log off; # Optional: Don't log access to assets
            }

            location / {
                try_files $uri $uri/ /index.php$request_uri;
            }
        }
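One way out of the loop is to verify what a client actually receives rather than what the Nextcloud overview page reports, e.g. with `curl -sI https://your.domain | grep -i strict`. The small helper below (my own sketch, not from the thread) checks whether a given HSTS header value meets Nextcloud's 15552000-second minimum:

```shell
check_hsts() {
  # Compare the max-age in an HSTS header value against Nextcloud's minimum.
  min=15552000
  age=$(printf '%s' "$1" | grep -o 'max-age=[0-9]*' | cut -d= -f2)
  if [ -n "$age" ] && [ "$age" -ge "$min" ]; then echo OK; else echo TOO_LOW; fi
}

check_hsts "max-age=15768000; includeSubDomains; preload"   # -> OK
check_hsts "max-age=60"                                     # -> TOO_LOW
```

When a reverse proxy like Nginx Proxy Manager sits in front of the container, both layers can set (or strip) the header, and duplicated or missing headers at one layer are a common cause of exactly this back-and-forth between the two warnings.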
  16. Hi @CorneliousJD, I am looking to get away from my Synology DS Note Station application. It's one app that synchronizes between everything, iOS/Android apps and a desktop app, and it has a webUI that I can access through a proxy. I am not sure, but would this then require all these 4 dockers: joplin/server:latest, postgresql14-joplin, acaranta/docker-joplin? The last one I couldn't find in the Unraid store? And this would provide the same functionality? (Except the webUI would be through VNC?) Sorry, I have been reading and it's not really stated very clearly what's required? Or I am just not finding it in my searches. Br Casperse
  17. I am also looking to migrate from DS Note (running on a VM on Unraid, just for this and Photo Station), so my goal is to find replacements and ditch the VM. I actually found your post by searching for Joplin! Question: is it correct that in order to run the Joplin server you need to first set up a Postgres database docker? For such a small note app I find it strange that it would need a separate DB? I want a solution that syncs across PC/iOS/webpage and with support for saving webpages to the note app. So far it seems that Joplin ticks all the boxes, but I haven't tested it yet.
  18. I came to the same conclusion. I did a test of 3 streams and one download, and since I have installed and am running a lot more dockers and never had an issue utilizing the max 8G for /tmp/PlexRamScratch, it "almost" saturated the server at 98%. I decided to disable all sync/download on the Plex server for now (I don't want to lose the gain I have using RAM). Let's hope Plex creates a new path in the future for these sync/download conversions.
  19. So basically I should just keep the existing solution? No benefits but simplicity (I like to see how much Plex/Emby uses of my memory). I haven't used this sync/download feature for a very, very long time (years), long before setting up RAM for transcoding. But I know Plex just released a major update on how it works now: I can't see a way to define a separate download path in Plex. So my only solution would be to upgrade my memory to 128GB 🤑 or move transcoding to my NVMe cache drive 😖
  20. Hi @mgutt, I think you made some changes to your guide since last time. 😄 And I now have some issues: after I enabled the "download" option from Plex locally for a few family members' iPads, they now get an error message stating: Playback error - Not enough disk space to convert this file/subject (translating). I then checked my RAM usage and it was +99% in the "PlexRamScratch" folder (I use your script to create two dirs on tmp, one for Plex and one for Emby). Script runs at boot:

        #!/bin/bash
        mkdir /tmp/PlexRamScratch
        chmod -R 777 /tmp/PlexRamScratch
        mount -t tmpfs -o size=8g tmpfs /tmp/PlexRamScratch

      My existing extra parameters for Plex:

        --runtime=nvidia --no-healthcheck --log-opt max-size=50m --log-opt max-file=1 --restart unless-stopped

      So I guess this would be my new extra parameters for Plex?:

        --runtime=nvidia --no-healthcheck --log-opt max-size=50m --log-opt max-file=1 --mount type=tmpfs,destination=/tmp/PlexRamScratch,tmpfs-size=8000000000 --restart unless-stopped

      Plex docker mount as before: Container Path: /transcode --> /tmp/PlexRamScratch. Would this be enough to fix these new download/transcode media files for local devices? Thanks for spending the time to share all this info with everyone!
  21. I love all the Unraid plugins, but I think I have some conflict between them and I can't find out what it is? 🙂 It just happened again, and it occurred to me that it might be something I should post here instead: during the night, all my dockers don't auto-start again?
  22. Hi binhex, I think you are the one to ask. Over some years I have transferred data one way --> to my Unraid server: FTP, Syncthing, Resilio Sync, and lately LFTP using the Seedsync docker & an Ubuntu VM (the last one very unstable). Could I use your binhex rclone to connect to my off-site server and do a one-way sync to my Unraid server? I have read all of the posts and they are mostly about Google Drive and big vendors? I was able to install rclone on the remote server doing this: Would this make it possible to use your docker? And would the configuration be simpler? Sorry, I am still trying to get my head around setting this up to just run and work (FAST).
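rclone can do this kind of one-way pull over plain SFTP, so it is not limited to Google Drive and the big vendors. A minimal sketch of a remote definition in rclone.conf, where the remote name, host, user, and key path are all hypothetical placeholders:

```ini
; rclone.conf -- hypothetical SFTP remote for the off-site server
[seedbox]
type = sftp
host = seedbox.example.com
user = casperse
key_file = /config/.ssh/id_rsa
```

With that remote defined, `rclone sync seedbox:/downloads /mnt/user/downloads -P` makes the local folder mirror the remote without ever writing back to it; adding `--dry-run` first previews what would be transferred or deleted.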
  23. @trurl, you wrote above that you can just replace both drives (I also need to replace both my parity drives with larger ones 🙂). Would it be safer to replace and rebuild with only one parity drive at a time? Or is the risk the same?
  24. Hi everyone, I really hope someone can help me out? After the update my dockers are not started up again? Status shows "exited 3 hours ago" for all of them, and that matches the auto-update. Logfile: