karldonteljames

Members · 106 posts
Everything posted by karldonteljames

  1. After moving from a pfSense device to a UDM Pro, I'm happy with most of the offering, with the exception of no OpenVPN. This solution would have meant not having to implement more hardware. Part of the reason I went for a UDM Pro was to slim down the hardware I'm using, not remove one device and have to add two. Unraid runs all the time anyway, so having this in a Docker image seems like an ideal solution, especially as I can put the OpenVPN Docker container in my DMZ and tunnel traffic without too many concerns.
  2. I'm running a UDM Pro, but until I get the container to actually run as expected on its own I'm not able to help. Are you using the UDM in the classic settings or the modern settings? It might be worth double-checking that the rule is enabled. It might also be worth trying to run in host mode rather than bridge mode; I'm not 100% sure, but I think using it in bridge mode affects the way routing works.
  3. Same issue here.
     ```
     Sorry, a session error has occurred
     It is possible that your session has expired or your login credentials do not allow access to this resource.
     See error text below for further details:
     SESSION ERROR: SESSION: Your session has expired, please reauthenticate (9007)
     ```
  4. Thanks, I'll take another look at it. Is it possible to have it back up to Google Drive or OneDrive, or would I need to use rclone for that? Does it back up in an open file format, so I could do an individual file / Docker restore?
  5. It's been a while since I used CA Backup, and the last time I did, it stopped all Dockers and VMs; is that still the case? I'm looking to back up my appdata and Unraid data to my Google Drive or OneDrive, and trying to think of the best way of doing that without stopping all of my containers. Ideas and suggestions are appreciated.
  6. I'm seeing an issue where I cannot connect to Deluge; I'm getting connection refused. When I take a look at the logs I can see the following every couple of minutes:
     ```
     DEPRECATED OPTION: --cipher set to 'aes-256-gcm' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'aes-256-gcm' to --data-ciphers or change --cipher 'aes-256-gcm' to --data-ciphers-fallback 'aes-256-gcm' to silence this warning.
     ```
     The top of my config is set up as below:
     ```
     client
     dev tun
     proto udp
     remote sweden.privacy.network 1198
     remote denmark.privacy.network 1198
     remote man.privacy.network 1198
     remote nl-amsterdam.privacy.network 1198
     remote no.privacy.network 1198
     remote brussels.privacy.network 1198
     remote lu.privacy.network 1198
     remote malta.privacy.network 1198
     remote monaco.privacy.network 1198
     resolv-retry infinite
     nobind
     persist-key
     cipher aes-256-gcm
     ncp-disable
     auth sha1
     tls-client
     remote-cert-tls server
     auth-user-pass credentials.conf
     compress
     verb 1
     <crl-verify>
     ```
     Any advice is appreciated.
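     For what it's worth, the deprecation warning itself spells out its own fix: either declare the cipher in a `data-ciphers` list or set it as the fallback. A minimal sketch of the relevant config lines, assuming you want to keep `aes-256-gcm`:
     ```
     # Option 1: advertise the cipher for negotiation
     data-ciphers AES-256-GCM:AES-128-GCM
     # Option 2: keep legacy behaviour for servers that don't negotiate ciphers
     data-ciphers-fallback aes-256-gcm
     ```
     Note this only silences the warning; it is logged as DEPRECATED, not as an error, so a "connection refused" on the Deluge web UI is likely a separate problem (daemon not up yet, or the port not reachable) rather than the cipher negotiation.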
  7. All went without a hitch. All drives replaced and upgraded. I did one drive at a time (I know I could probably have done two at once, but I wanted to play it safe). Each new 10TB drive took about 22 hours to rebuild, so all in it took a little over a week to replace all eight drives. The only other question I had was about automatically removing duplicates.
  8. That's fine, I'll stop my backup from running for a couple of days; that isn't a huge issue, as I'm not taking many pictures at the moment anyway. I don't think the cache is protected by the parity (I have two cache drives, so I think they protect each other?), and all of my Docker appdata is running from there, so I can't see that it will make a huge difference. One other question: I ran some advanced tests on my Unraid server using Fix Common Problems, and it has detected multiple duplicate files across a few of the drives. Is it possible to automatically remove the duplicates? I've tried to run dupeGuru, but it seems to lock up and not report anything.
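     If dupeGuru keeps locking up, one lightweight alternative is to hash the files and let `uniq` surface matching checksums. A hedged sketch (the demo below builds a throwaway directory so the output shape is visible; on the server you would point `find` at `/mnt/disk1 /mnt/disk2 ...` instead, and review the list by hand before deleting anything):
     ```shell
     # Build a demo directory with one duplicate pair and one unique file.
     dir=$(mktemp -d)
     printf 'same contents' > "$dir/a.txt"
     printf 'same contents' > "$dir/b.txt"
     printf 'different'     > "$dir/c.txt"

     # Hash every file, sort by hash, and print only lines whose first 32
     # characters (the md5 digest) repeat -- i.e. files with identical contents.
     find "$dir" -type f -print0 | xargs -0 md5sum | sort | uniq -w32 -D
     ```
     This lists a.txt and b.txt but not c.txt. Note it finds content duplicates anywhere in the tree, whereas Fix Common Problems flags a narrower case (the same share-relative path existing on more than one array disk), so the two lists won't necessarily match.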
  9. It looks pretty straightforward.
     Shutdown > Replace one parity drive > Restart > Assign new drive > Let Unraid rebuild.
     Shutdown > Replace second parity drive > Restart > Assign new drive > Let Unraid rebuild.
     Shutdown > Replace one non-parity drive > Restart > Assign new drive > Let Unraid rebuild > Repeat until all drives are upgraded.
     During this process I'm assuming that I cannot use any of my Dockers; is that correct? All of my Dockers are running on my cache drives, which will not be replaced. Is there any way I can continue to use my Dockers during the rebuild process?
  10. Thank you. I'll read through this. My plan is always to keep the disks as they are until the replacements are confirmed to be working ok.
  11. Good morning all. I use my Unraid server basically for Docker containers and as a backup target. Below shows my current usage. I have no more physical space in my server to install new drives, and no additional SATA/SAS ports either. I want to upgrade all of the drives (with the exception of the cache) to 10TB drives. How would I go about swapping all of these drives out without data loss? Any help would be really appreciated.
  12. Does anyone know if MotionEye has an RTSP output? If I run the stream from a couple of my cameras directly into Frigate, the cameras seem to restart every few minutes. I'm not sure why, but I've confirmed the correlation by stopping the Frigate container for a couple of hours with no restarts; within a few minutes of Frigate being started, the cameras restart. I am running MotionEye, so I would be happy to feed MotionEye into Frigate if possible. Any advice? Thanks.
  13. Thanks, that got it started. I don't have the option for the WebUI, but I can type that in manually, I suppose. Now I have the following issue in the log:
      ```
      File "/opt/frigate/frigate/video.py", line 131, in __init__
          self.regions = self.config['regions']
      KeyError: 'regions'
      ```
      Thank you for your help. I really appreciate it.
  14. Thanks, I think I've got most of the settings down, just not sure what this should be. I seem to be getting this error when I click apply:
      ```
      /usr/bin/docker: Error response from daemon: invalid volume specification: '/mnt/user/appdata/frigate:config:ro': invalid mount config for type "bind": invalid mount path: 'config' mount path must be absolute.
      See '/usr/bin/docker run --help'.
      The command failed.
      ```
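      The error message points at the container-side path: in a Docker bind mount (`host:container:options`) the container path must be absolute, and `config` has no leading slash. A sketch of the corrected mapping, assuming the container expects its configuration at `/config`:
      ```
      # Broken: container path "config" is relative
      -v /mnt/user/appdata/frigate:config:ro

      # Fixed: container path starts with "/"
      -v /mnt/user/appdata/frigate:/config:ro
      ```
      In the Unraid template this corresponds to the "Container Path" field of the volume mapping, which is what looks to be missing its leading slash.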
  15. Thanks. Really sorry, but do I just run that in a command window? (Obviously changing the path to my appdata folder.) I've only ever added Docker containers via the Apps tab!
  16. Thanks, I'm not sure how I'd find out what the volumes/paths would be?
  17. Good evening, would it be possible to get this Docker container for Unraid, please? https://github.com/blakeblackshear/frigate It would be a great addition for home automation.
  18. Thank you. Out of interest, do you know where we can request Docker containers? I think something like this would be great to integrate with HA, but I have no idea how to build a container in Unraid. https://github.com/blakeblackshear/frigate Thanks again. I'll take a look at that now.
  19. Man, that really sucks. Could you help me out with how you used curl, please? I'm trying to use it for Home Assistant integration; I'm happy to install the MQTT client in the Docker container if required, just not sure how to do it.
  20. Evening all, I'm trying to get MotionEye to publish an MQTT event when it detects motion, but it looks like Mosquitto isn't included. Could someone tell me how I might be able to add this in, please? The command I'm trying to run on motion detected is:
      ```
      mosquitto_pub -h MQTTADDRESS -p 1883 -u MQTTUSERNAME -P MQTTPASSWORD -t cameras/cam1/motion -m "On"
      ```
      and on motion stop:
      ```
      mosquitto_pub -h MQTTADDRESS -p 1883 -u MQTTUSERNAME -P MQTTPASSWORD -t cameras/cam1/motion -m "Off"
      ```
      I believe that mosquitto_pub is missing, as if I run this from the command line I get an error:
      ```
      sh: mosquitto_pub: not found
      ```
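      For anyone hitting the same wall: the "not found" error means only the client binary is missing, not a broker; `mosquitto_pub` ships in the Mosquitto client tools package. A hedged sketch of installing it inside the running container (the container name `MotionEye` and the Alpine base are assumptions on my part; a Debian-based image would use apt-get instead, and changes made this way are lost when the container is rebuilt, so they really belong in a custom image or a post-start script):
      ```
      # Alpine-based image:
      docker exec MotionEye apk add --no-cache mosquitto-clients

      # Debian/Ubuntu-based image:
      docker exec MotionEye sh -c 'apt-get update && apt-get install -y mosquitto-clients'
      ```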
  21. Evening all, I updated Sonarr and Radarr today, and since then I'm getting "Access to the path is denied." on all media that is trying to be imported, in both Radarr and Sonarr. Any idea why this might be? I don't think anything else has changed. I set up Ombi today, but I don't think that changed any of my config. EDIT: I'm watching one of the folders it is trying to copy videos into; it puts the partial file in, which grows in size, then disappears. UPDATE: Rolling back with `docker pull binhex/arch-radarr:0.2.0.1450-1-01` and the previous version of Sonarr has allowed the files to be written to my media server.
  22. Evening all, My Unraid server rebooted today, and when it came back up, four of my disks were missing. I have the following in my Unraid syslinux config: `append iommu=pt initrd=/bzroot`. I'm hoping this isn't a hardware failure! I upgraded to 6.7.3-rc2 and it didn't make any difference. Also, if I leave it to boot normally from the blue screen to Unraid, it says:
      ```
      loading bzroot.. ok
      loading bzroot(random characters)... Failed : No such file or directory
      ```
      If I select safe mode or GUI mode it seems to load, but is still missing those four drives. Removing the above line (`append iommu=pt initrd=/bzroot`) results in a failed boot that stops on a line that reads:
      ```
      ---[ end trace 7c9523f71dcfb5e2 ]---
      ```
      I'm still able to boot into safe mode, but still without the missing drives. Diagnostics are attached. Any help is appreciated. REALLY appreciated. maximus-diagnostics-20190801-1852.zip
  23. Adding the Deluge Privoxy setting into Lidarr solved the issue.
  24. Evening, I'm trying to add a download client to Lidarr, but I keep getting the error: Unknown exception. The operation has timed out.: 'http://192.168.12.216:8112/json'. Lidarr is set up with IP 192.168.12.215. Sonarr and Radarr are both connected to Deluge, and they are in the same IP range (12.216 and 12.217). Any idea why this might be? I'm seeing this in the logs:
      ```
      2019-06-03 08:00:29,991 DEBG 'lidarr' stdout output:
      [Error] Deluge: Unable to test connection [v0.6.2.883]
      System.Net.WebException: The operation has timed out.: 'http://192.168.12.216:8112/json' ---> System.Net.WebException: The operation has timed out.
        at System.Net.HttpWebRequest.GetRequestStream () [0x0000f] in /build/mono/src/mono/mcs/class/System/System.Net/HttpWebRequest.cs:910
        at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.GetResponse (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookies) [0x000ef] in C:\projects\lidarr\src\NzbDrone.Common\Http\Dispatchers\ManagedHttpDispatcher.cs:68
        --- End of inner exception stack trace ---
        at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.GetResponse (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookies) [0x001c8] in C:\projects\lidarr\src\NzbDrone.Common\Http\Dispatchers\ManagedHttpDispatcher.cs:102
        at NzbDrone.Common.Http.Dispatchers.FallbackHttpDispatcher.GetResponse (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookies) [0x000b5] in C:\projects\lidarr\src\NzbDrone.Common\Http\Dispatchers\FallbackHttpDispatcher.cs:53
        at NzbDrone.Common.Http.HttpClient.ExecuteRequest (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookieContainer) [0x0007e] in C:\projects\lidarr\src\NzbDrone.Common\Http\HttpClient.cs:121
        at NzbDrone.Common.Http.HttpClient.Execute (NzbDrone.Common.Http.HttpRequest request) [0x00008] in C:\projects\lidarr\src\NzbDrone.Common\Http\HttpClient.cs:57
        at NzbDrone.Core.Download.Clients.Deluge.DelugeProxy.AuthenticateClient (NzbDrone.Common.Http.JsonRpcRequestBuilder requestBuilder, NzbDrone.Core.Download.Clients.Deluge.DelugeSettings settings, System.Boolean reauthenticate) [0x0005b] in C:\projects\lidarr\src\NzbDrone.Core\Download\Clients\Deluge\DelugeProxy.cs:292
        at NzbDrone.Core.Download.Clients.Deluge.DelugeProxy.BuildRequest (NzbDrone.Core.Download.Clients.Deluge.DelugeSettings settings) [0x0006b] in C:\projects\lidarr\src\NzbDrone.Core\Download\Clients\Deluge\DelugeProxy.cs:205
        at NzbDrone.Core.Download.Clients.Deluge.DelugeProxy.ProcessRequest[TResult] (NzbDrone.Core.Download.Clients.Deluge.DelugeSettings settings, System.String method, System.Object[] arguments) [0x00000] in C:\projects\lidarr\src\NzbDrone.Core\Download\Clients\Deluge\DelugeProxy.cs:212
        at NzbDrone.Core.Download.Clients.Deluge.DelugeProxy.GetVersion (NzbDrone.Core.Download.Clients.Deluge.DelugeSettings settings) [0x00000] in C:\projects\lidarr\src\NzbDrone.Core\Download\Clients\Deluge\DelugeProxy.cs:53
        at NzbDrone.Core.Download.Clients.Deluge.Deluge.TestConnection () [0x00000] in C:\projects\lidarr\src\NzbDrone.Core\Download\Clients\Deluge\Deluge.cs:209

      2019-06-03 08:00:29,997 DEBG 'lidarr' stdout output:
      [Warn] LidarrErrorPipeline: Invalid request Validation failed:
      ```
      I've made a little progress: if Lidarr is running as "host" rather than on its own IP address, it can connect to Deluge. Can I add a path for the Docker containers to connect to each other on their own subnet? If I disable the VPN on Deluge or qBittorrent, it will also connect when using its own IP. (I'm using binhex-delugevpn.) I am able to ping the binhex-delugevpn container from the binhex-lidarr console, both by name and by IP address. Most of my Dockers are assigned their own IP address in subnet 192.168.12.0/24; this subnet is on a separate network port, running on VLAN 12. My Unraid server is on 192.168.10.0/24 (running a 4-port bonded network on VLAN 10).
      [screenshots: Dockers, Docker settings, Routing Table, Docker Ethernet Port, Unraid Ethernet Port]
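      One way to narrow this down (a sketch, assuming curl is available in the Lidarr container, which may not be the case in every image): test the exact URL from the log, from the Lidarr console.
      ```
      # From the binhex-lidarr console: does TCP to the Deluge web port work at all?
      curl -v --max-time 10 http://192.168.12.216:8112/json
      ```
      Ping succeeding while this times out suggests something is silently dropping that TCP port rather than a Lidarr misconfiguration; the binhex VPN containers apply iptables rules that block traffic outside the tunnel except for whitelisted networks, which would fit the symptom of it working only with the VPN disabled or with host networking.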