mgranger

Everything posted by mgranger

  1. I don't see anything called "snapshot" or anything that lets me change how often an image is sent. There is nothing about variable bit rate, just the maximum. My bit rate is about 496 kB/s for the one camera and 300 kB/s or less for the others; the total is 1.33 MB/s.
  2. Well, maybe I spoke too soon. I am still getting tearing; it is better than it was at first, but it is still happening more than it should.
  3. I have all 4 cameras running right now, and the tearing seems a lot better, but everything is a lot lower quality compared to where I was previously:
     - I have 2 fps (which is where I was before).
     - I couldn't find anything called "snapshot".
     - I set the max bitrate to 2048 (I can't remember what it was previously, but it was around 5120 or 6144).
     - The resolution is now 1080p (1920x1072); it was previously 2560x1440.
     - I set the color to 8-bit (previously it was 32-bit).
     - Also, I am now just on Record rather than Mocord.
     I have a couple of other dockers running on my system, but nothing that is really taking up many resources (Emby, but nothing streaming at the time; no VMs). My cameras are Reolink RLC-422 and Reolink RLC-410, all wired. I will attach my Unraid diagnostics as well. finalizer-diagnostics-20181019-1000.zip
  4. So is there a way to go back to 1.30? I don't even care if I have to create a new database and just start from scratch; I am too frustrated with 1.32.
  5. My processor load seems all over the place. Before I updated to 1.32, the load in ZoneMinder was 1 or 1.5 with my 4 cameras; after the update it is closer to the high 4s or mid 5s. This is with Mocord. If I set it to just Record, it does come down quite a bit (which makes sense). My first two cameras say they are at or below 2 fps within ZoneMinder. My 3rd camera says around 140 fps and the 4th 1000 fps, which is wrong. All are actually set to 2 fps in the camera, so I am not sure why they report so high.
  6. I have not looked at any networking issues yet, but if I pull up the video feed of the camera from its website I get a feed no problem, while at the same time the ZoneMinder image is having a problem, so it makes me think the network is not the issue. It seems like it is something in ZoneMinder/Unraid somewhere.
  7. I just deleted it and reinstalled, and it is already failing. I am using ffmpeg and TCP. I didn't see anything that looks different/wrong, but obviously something is off.
  8. Is there a way to go back to the old repository from before 1.32? I was just going to see if going back helps fix the issue.
  9. OK, I took the SHMEM% down to 25 and pinned CPUs 2,6 and 3,7 (yes, I am on 6.6). My /dev/shm usage went up to 35%, which makes sense since I lowered the SHMEM%, but I am still seeing the same problem on my images.
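     For reference, here is roughly what the pinning amounts to. As I understand it, the Unraid pinning just applies Docker's cpuset flag to the container, so the equivalent entry in the container's "Extra Parameters" would be something like this (the core numbers are my hyperthread pairs; adjust to your CPU):

         # Docker flag (added via the container's "Extra Parameters" on Unraid)
         # pinning ZoneMinder to cores 2,3 and their hyperthreads 6,7
         --cpuset-cpus=2,3,6,7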
  10. OK, I turned the SHMEM% down to 50%, which was the default. My /dev/shm usage is hovering around 18%, which seems good enough. I will try to pin some CPUs to ZoneMinder if I can figure it out.
  11. This is what I am seeing. I restart the docker and it seems to fix it for a while, but then it eventually goes back to this, or sometimes worse. This happens to all 4 of my cameras, or sometimes just a portion of them. I have already increased the SHMEM to 60% (I have 32 GB on the system).
  12. So I updated to the new 1.32.2 and now I am having issues with the images being blurry. On the update before the new layout I didn't have any problems with blurry images. Is there something that changed that is causing this?
  13. I figured it out. In my default.ovpn file I had to list the password.txt file next to the authentication line. That seemed to fix it.
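     In other words, the auth-user-pass line in my default.ovpn now points at the credentials file (username on line 1, password on line 2) instead of making OpenVPN prompt. Something like this one-liner does the edit (the paths are from my setup, so treat them as placeholders):

         # Point the bare auth-user-pass line at the credentials file;
         # paths are placeholders for wherever your config actually lives
         sed -i 's|^auth-user-pass.*|auth-user-pass /config/password.txt|' /config/default.ovpn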
  14. When I use transmission_vpn I keep getting the following error:
     Using OpenVPN provider: NORDVPN
     Starting OpenVPN using config default.ovpn
     Setting OPENVPN credentials...
     adding route to local network 192.168.1.0/24 via 172.17.0.1 dev eth0
     Wed Sep 26 04:42:04 2018 OpenVPN 2.4.6 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Apr 24 2018
     Wed Sep 26 04:42:04 2018 library versions: OpenSSL 1.0.2g 1 Mar 2016, LZO 2.08
     Wed Sep 26 04:42:04 2018 neither stdin nor stderr are a tty device and you have neither a controlling tty nor systemd - can't ask for 'Enter Auth Username:'. If you used --daemon, you need to use --askpass to make passphrase-protected keys work, and you can not use --auth-nocache.
     Wed Sep 26 04:42:04 2018 Exiting due to fatal error
  15. I have been trying to get Nextcloud set up with a reverse proxy following the Spaceinvader One tutorial, however when I do this I keep getting a '500 Internal Server Error'. I take a look in the error.log and there is nothing there. What could I be doing wrong? Here is my nextcloud.subdomain.conf:

     # make sure that your dns has a cname set for nextcloud
     # edit your nextcloud container's /config/www/nextcloud/config/config.php file and change the server address info as described
     # at the end of the following article: https://blog.linuxserver.io/2017/05/10/installing-nextcloud-on-unraid-with-letsencrypt-reverse-proxy/
     server {
         listen 443 ssl;
         server_name nextcloud.*;
         include /config/nginx/ssl.conf;
         client_max_body_size 0;
         location / {
             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             set $upstream_nextcloud nextcloud;
             proxy_max_temp_file_size 2048m;
             proxy_pass https://$upstream_nextcloud:443;
         }
     }

     And here is my config.php file (I changed my URL from what it actually is to "test"):

     <?php
     $CONFIG = array (
       'memcache.local' => '\\OC\\Memcache\\APCu',
       'datadirectory' => '/data',
       'instanceid' => 'oc5vpjwh780k',
       'passwordsalt' => 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
       'secret' => 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
       'trusted_domains' => array (
         0 => '192.168.1.100:448',
         1 => 'nextcloud.test.com',
       ),
       'overwrite.cli.url' => 'https://nextcloud.test.com',
       'overwritehost' => 'nextcloud.test.com',
       'overwriteprotocol' => 'https',
       'dbtype' => 'mysql',
       'version' => '13.0.5.2',
       'dbname' => 'nextcloud',
       'dbhost' => '192.168.1.100:3306',
       'dbport' => '',
       'dbtableprefix' => 'oc_',
       'dbuser' => 'user',
       'dbpassword' => 'XXXXXXXXXXXXXXX',
       'installed' => true,
     );
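     Since the proxy's error.log is empty, I assume the next place to look is Nextcloud's own log inside the data directory ('datadirectory' is /data in the config above). Something like this, with the container name being whatever yours is called:

         # Tail Nextcloud's own log for the actual cause of the 500
         docker exec nextcloud tail -n 50 /data/nextcloud.log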
  16. I have the same issue. Did you ever figure it out?
  17. I am getting an issue where one or two of my cameras seem to be failing quite often. I don't think my system is overloaded, but something seems to be happening. I only have 4 cameras, and 32 GB of RAM of which I am sharing 65%. My "South Cam Day" seems to be dropping intermittently, as well as my "Driveway Cam Day". Here is a portion of my log file; not sure if anyone on here can help me out. zm-log1.txt
  18. I went back to 0.61.1 and it seems to be working, so it must be something that changed within the docker which causes the error. Edit: Well, I am still getting the error in 0.61.1, however it happens less often and the automation still works, probably because it doesn't cut out as often and didn't happen to cut out at the time the automation was running. Not sure why it seems better in 0.61.1, but it does seem a little better.
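     For anyone wondering how I went back: on Unraid I just pinned the version tag in the container's Repository field, which is the same as pulling the tagged image. The image name below is the standard Home Assistant one, so adjust it if yours differs:

         # Pin the container to the old release instead of :latest
         docker pull homeassistant/home-assistant:0.61.1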
  19. I am having a lot of issues with this container. The honeywell component won't seem to stay connected:
     2018-04-14 09:06:34 WARNING (MainThread) [homeassistant.helpers.condition] Value cannot be processed as a number:
     2018-04-14 09:06:34 WARNING (MainThread) [homeassistant.helpers.condition] Value cannot be processed as a number:
     2018-04-14 09:06:34 WARNING (MainThread) [homeassistant.helpers.condition] Value cannot be processed as a number:
     2018-04-14 09:52:52 ERROR (MainThread) [homeassistant.components.sensor.wunderground] Error fetching WUnderground data: ContentTypeError("0, message='Attempt to decode JSON with unexpected mimetype: '",)
     2018-04-14 10:06:43 ERROR (SyncWorker_7) [homeassistant.components.climate.honeywell] SomeComfort update failed, Retrying - Error: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))
     2018-04-14 10:16:45 ERROR (SyncWorker_37) [homeassistant.components.climate.honeywell] SomeComfort update failed, Retrying - Error: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))
     2018-04-14 10:26:48 ERROR (SyncWorker_34) [homeassistant.components.climate.honeywell] SomeComfort update failed, Retrying - Error: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))
     2018-04-14 10:30:19 ERROR (SyncWorker_22) [homeassistant.core] Error doing job: Task was destroyed but it is pending!
     2018-04-14 10:30:19 ERROR (SyncWorker_22) [homeassistant.core] Error doing job: Task exception was never retrieved
     RuntimeError: cannot reuse already awaited coroutine
     2018-04-14 10:36:47 ERROR (SyncWorker_16) [homeassistant.components.climate.honeywell] SomeComfort update failed, Retrying - Error: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))
     2018-04-14 10:56:51 ERROR (SyncWorker_11) [homeassistant.components.climate.honeywell] SomeComfort update failed, Retrying - Error: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))
     2018-04-14 11:06:52 ERROR (SyncWorker_7) [homeassistant.components.climate.honeywell] SomeComfort update failed, Retrying - Error: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))
     2018-04-14 11:26:56 ERROR (SyncWorker_36) [homeassistant.components.climate.honeywell] SomeComfort update failed, Retrying - Error: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))
     2018-04-14 11:46:33 WARNING (MainThread) [homeassistant.helpers.entity] Update of sensor.dark_sky_precip_intensity is taking over 10 seconds
     2018-04-14 11:46:54 WARNING (MainThread) [homeassistant.components.sensor] Updating darksky sensor took longer than the scheduled update interval 0:00:30
     2018-04-14 11:47:00 ERROR (SyncWorker_31) [homeassistant.components.climate.honeywell] SomeComfort update failed, Retrying - Error: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))
     2018-04-14 11:51:35 WARNING (MainThread) [homeassistant.helpers.entity] Update of sensor.pws_precip_today_in is taking over 10 seconds
     2018-04-14 11:51:35 ERROR (MainThread) [homeassistant.components.sensor.wunderground] Error fetching WUnderground data: TimeoutError()
     2018-04-14 14:00:29 WARNING (MainThread) [homeassistant.helpers.entity] Update of sensor.pws_feelslike_f is taking over 10 seconds
     2018-04-14 14:00:29 ERROR (MainThread) [homeassistant.components.sensor.wunderground] Error fetching WUnderground data: TimeoutError()
     2018-04-14 14:36:34 WARNING (MainThread) [homeassistant.helpers.entity] Update of sensor.pws_temp_high_record_f is taking over 10 seconds
     2018-04-14 14:36:34 ERROR (MainThread) [homeassistant.components.sensor.wunderground] Error fetching WUnderground data: TimeoutError()
  20. Are there supposed to be Python packages with Home Assistant in Unraid? I looked at my deps/lib/python3.6 folder, and the only thing in there is something I added for my Sony TV. When I used Home Assistant prior to Unraid there were a lot more folders in here. I copied my files over, however I did not copy the python folder over, which was 3.5. I thought this would populate automatically, however nothing is showing up. I am getting errors for my Honeywell thermostat, my MyQ garage door, and a Chromecast, and I was wondering if it was related to not having these Python files.
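     If it is related, the thing I am going to try is clearing the deps folder so Home Assistant re-downloads each configured component's requirements on the next start. A rough sketch, with the appdata path as a placeholder for wherever your config actually lives:

         # Stop the container, clear cached component requirements, restart;
         # Home Assistant reinstalls what configured components need on boot
         docker stop home-assistant
         rm -rf /mnt/user/appdata/home-assistant/deps
         docker start home-assistant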
  21. Not sure if this helps, but I tried to follow along with rsync and watch as it adds files. It seems that one file gets missed, and from then on rsync keeps running but never adds anything else, and will eventually produce an error after some time saying that this file was an error.
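     In case anyone wants to follow along the same way: since the script below pipes rsync through logger, the transfers show up in the syslog, so I just tail that (the log path may differ on other systems):

         # Watch rsync's file-by-file output as it lands in the syslog
         tail -f /var/log/syslog | grep rsync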
  22. OK, so I think I figured out getting rsync to work with a local source disk and an unassigned device for the destination. I am now trying to rsync a local source disk to an unassigned SMB share, but I am really struggling. It seems to be sort of working, but I have some major concerns. I basically use the same code as above, except modified a little to sync to the share folder, so it looks like the following:

     #!/bin/bash

     Source2=/mnt/user/Media
     Destination2=/mnt/disks/FinalizerMediaBackup/Media

     #####################
     ### Destination 2 ###
     #####################

     ### Source 2 ###
     # Rotate the daily snapshots, dropping the oldest
     rm -rf "$Destination2"/Daily5
     mv "$Destination2"/Daily4 "$Destination2"/Daily5
     mv "$Destination2"/Daily3 "$Destination2"/Daily4
     mv "$Destination2"/Daily2 "$Destination2"/Daily3
     mv "$Destination2"/Daily1 "$Destination2"/Daily2
     # Sync into Daily0, hard-linking unchanged files against Daily1
     rsync -avh --delete --link-dest="$Destination2"/Daily1/ "$Source2"/ "$Destination2"/Daily0/ |& logger

     When I run this, the folders are populated on the share and it starts to run, however a couple of minutes in I get an error message that pops up and the process stops. The message is:

     root: rsync error: error in file IO (code 11) at receiver.c(853) (receiver=3.1.3)

     I have run it a couple of times and it seems to stop on the same file, however I looked at the permissions of the original file and they seem to be the same as the others. If I try running the script again, it acts like it is running, but then weird things start happening: it will start removing some of the files and eventually alarm out on files that were originally there but have since been removed. This doesn't really make sense to me yet. The weird thing is I have copied the original source to a local unassigned device without any errors, but when I try to do it over the network it errors out. My locations are correct because I see the files/folders start to populate. Is there something I am missing? The destination is a Windows PC which is sharing the drive; the drive is NTFS and is mounted in Unraid using Unassigned Devices SMB shares.
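     The next thing I am going to try is rerunning rsync on just the file it dies on, outside the script, to see if it gives any more detail. A sketch, with the source path as a placeholder since it depends on which file errors:

         # Retry only the problem file with progress output
         rsync -avh --progress "/mnt/user/Media/<problem-file>" \
               "/mnt/disks/FinalizerMediaBackup/Media/Daily0/"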
  23. So I saw somewhere online that to help troubleshoot, you can run a test using AJA System Test. Not sure if it will help here or not, but I will post my results just in case. It looks like I should be capable of getting better than the 80 MB/s that it keeps dropping to. MILLENNIUM-FALC.nfo MILLENNIUM-FALC.pdf EDIT: So I have transferred a file back and forth a couple of times today just to see if anything has changed. It seems to be transferring at around 400 MB/s from Unraid to the Windows PC and around 300 MB/s from the PC to Unraid. Not sure why all of a sudden it is working now, but it seems to be better for some reason. I will keep an eye on it, but I have a feeling something is still not quite right. Until then, thanks for all the help.
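     If the speeds drop again, the other test I plan to run is iperf3 between the two machines to take the disks out of the equation and measure raw network throughput (assuming iperf3 is installed on both ends; the IP is my server's):

         # On the Unraid server:
         iperf3 -s

         # On the Windows PC, pointed at the server:
         iperf3 -c 192.168.1.100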