stormshaker

Members
  • Posts: 37
Everything posted by stormshaker

  1. Hardware tone mapping isn't working for me: Plex drops back to CPU transcoding. GTX 1650, Unraid 6.9.2, official pms docker on the latest branch, latest production nvidia-driver (470.63.1). Hardware transcoding itself works fine. Do I have to use the linuxserver.io container? Looks like other people have HDR tone mapping working with the official docker. J
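For context, NVIDIA transcoding in the official Plex container on Unraid is usually wired up through the container template's extra parameters and environment variables. A hedged sketch of the equivalent docker run flags (the image tag and the choice of `all` for both variables are assumptions, not from the post; check your GPU with `nvidia-smi -L` first):

```shell
# Sketch only: typical NVIDIA runtime settings for the official Plex container.
# On Unraid these normally live in the template's "Extra Parameters" and
# variable fields rather than a hand-typed docker run.
docker run -d --name plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  plexinc/pms-docker:latest
```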
  2. I hear your pain. I installed my first Protect setup last month, and it's been good. I like the UI and the iOS app. I like that it has a battery for graceful shutdowns and that the UniFi Network controller is built in. It's fast to connect to remotely. The downside is no storage redundancy, and I couldn't get it to power over USB (tip: just buy the correct Ubiquiti PoE injector; it's cheaper than a QC USB charger anyway). It's not that expensive, considering maintenance should be much easier (in fact, it can keep itself completely up to date automatically, unlike the Network controller). UBNT have a single wireless camera model, but it's for indoor use only.
  3. Mine also woke me up last night with the same failure. Is it just a coincidence that they failed at the same time, or did the same happen to others? TBH, I think I'll move to Protect on the CK-Gen2+; Ubiquiti seems to actually be putting some effort into that.
  4. For me, a fresh install on :beta was showing web UI version 3.10.0-beta3 but Unifi Controller v3.9.12. Cameras upgraded to firmware 4.8.40. I switched in place to :testing, and now both the Web UI and Controller versions show 3.10.1.
  5. I switched from :latest to :beta and it's stuck on Upgrading in the WebUI. /var/lib/unifi-video/logs/mongod.log says:

2019-02-16T13:04:35.783+1030 I CONTROL [initandlisten] options: { config: "/usr/lib/unifi-video/conf/mongodv3.0+.conf", net: { bindIp: "127.0.0.1", http: { enabled: false }, port: 7441 }, operationProfiling: { mode: "off", slowOpThresholdMs: 500 }, storage: { dbPath: "/usr/lib/unifi-video/data/db", engine: "mmapv1", journal: { enabled: true }, mmapv1: { preallocDataFiles: true, smallFiles: true }, syncPeriodSecs: 60.0 }, systemLog: { destination: "file", logAppend: true, path: "logs/mongod.log" } }
2019-02-16T13:04:35.802+1030 I STORAGE [initandlisten] exception in initAndListen: 28662 Cannot start server. Detected data files in /usr/lib/unifi-video/data/db created by the 'wiredTiger' storage engine, but the specified storage engine was 'mmapv1'., terminating

My install isn't critical, so I might try a fresh install on the :beta branch.
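For anyone who wants to keep their data rather than reinstall: that error is mongod refusing to open a data directory written by wiredTiger while the config file named in the log pins the engine to mmapv1. A hedged sketch of the relevant stanza, assuming the conf is plain mongod YAML and that you back up data/db before touching anything (this is not from the UniFi docs, and the container may overwrite the file on restart):

```yaml
# /usr/lib/unifi-video/conf/mongodv3.0+.conf  (path taken from the log above)
# Assumption: make the configured engine match the files that already exist.
storage:
  dbPath: /usr/lib/unifi-video/data/db
  engine: wiredTiger   # was mmapv1; must match whatever engine wrote data/db
```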
  6. Hmm, the docker updated but I still have 3.9.12. There are no in-app updates available, either.
  7. Looks like 3.10.1 has been approved for general release: https://community.ubnt.com/t5/UniFi-Video-Blog/UniFi-Video-3-10-1-Full-Release/ba-p/2674717 Any chance you could update the docker? Thanks, James
  8. Just to close this off: I found I had a 'umask 0700' in my go start script, added as part of copying the SSH key in. All sorted now.
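For anyone hitting the same symptoms: a restrictive umask left set in the go script applies to everything started afterwards, so files like /etc/nginx/htpasswd end up unreadable to nginx. A hedged sketch of scoping it to just the key copy (the source path and the 0077 value are illustrative assumptions, not from the original post):

```shell
# /boot/config/go excerpt (sketch): confine the restrictive umask to a
# subshell so the rest of boot keeps the default umask.
(
  umask 0077                                    # applies only inside ( ... )
  mkdir -p /root/.ssh
  cp /boot/config/ssh/id_rsa /root/.ssh/id_rsa  # example path, an assumption
)
# Back here the umask is unchanged, so emhttp and nginx inherit
# normal file permissions at startup.
```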
  9. Any more thoughts on this one? After a reboot I can't open the WebGUI until I open an ssh session and run: chmod go+r /etc/nginx/htpasswd. Thanks, James
  10. Diagnostics attached. FYI: emhttp generated the file, but then another nginx permissions error caused a 403 to be returned in the webGUI. nas-diagnostics-20180130-1308.zip
  11. Hi, I have a couple of permission issues with the WebGUI / nginx after the 6.4 upgrade. On startup:

Jan 30 11:35:57 nas nginx: 2018/01/30 11:35:57 [crit] 3899#3899: *162 open() "/etc/nginx/htpasswd" failed (13: Permission denied), client: 172.16.20.7, server: , request: "POST /webGui/include/ProcessStatus.php HTTP/1.1", host: "nas", referrer: "http://nas/webGui/include/Boot.php"

If I chmod 644 /etc/nginx/htpasswd, I can log in to the webGUI until the next reboot. But then docker icons are missing, caused by this (one logged for each docker):

Jan 30 11:10:45 nas nginx: 2018/01/30 11:10:45 [crit] 3766#3766: *2187749 stat() "/usr/local/emhttp/state/plugins/dynamix.docker.manager/images/linuxserver-radarr-latest-icon.png" failed (13: Permission denied), client: 172.16.20.7, server: , request: "GET /state/plugins/dynamix.docker.manager/images/linuxserver-radarr-latest-icon.png HTTP/1.1", host: "nas", referrer: "http://nas/Dashboard"

How can I fix this more permanently? Thanks, James
  12. Hi, My server has been crashing a bit recently, maybe once a week. There seem to be a lot of these errors in the syslog:

Jul 6 14:48:27 nas kernel: CPU: 0 PID: 8477 Comm: CPU 0/KVM Tainted: G W 4.9.30-unRAID #1

And the call trace looks like the below. One of my VMs?

Jul 6 14:48:27 nas kernel: Call Trace:
Jul 6 14:48:27 nas kernel: [<ffffffff813a4a1b>] dump_stack+0x61/0x7e
Jul 6 14:48:27 nas kernel: [<ffffffff810cb5b1>] warn_alloc+0x102/0x116
Jul 6 14:48:27 nas kernel: [<ffffffff810cc07f>] __alloc_pages_nodemask+0xa59/0xc71
Jul 6 14:48:27 nas kernel: [<ffffffff81108678>] kmalloc_large_node+0x54/0x82
Jul 6 14:48:27 nas kernel: [<ffffffff8110b1be>] __kmalloc_node+0x22/0x132
Jul 6 14:48:27 nas kernel: [<ffffffff8100c8f6>] reserve_ds_buffers+0x192/0x353
Jul 6 14:48:27 nas kernel: [<ffffffff8100460f>] x86_reserve_hardware+0x135/0x147
Jul 6 14:48:27 nas kernel: [<ffffffff81004671>] x86_pmu_event_init+0x50/0x1c9
Jul 6 14:48:27 nas kernel: [<ffffffff810b8208>] perf_try_init_event+0x41/0x72
Jul 6 14:48:27 nas kernel: [<ffffffff810b8af3>] perf_event_alloc+0x482/0x7ca
Jul 6 14:48:27 nas kernel: [<ffffffffa0182ec7>] ? reprogram_counter+0x54/0x54 [kvm]
Jul 6 14:48:27 nas kernel: [<ffffffff810ba776>] perf_event_create_kernel_counter+0x24/0x120
Jul 6 14:48:27 nas kernel: [<ffffffffa0182c10>] pmc_reprogram_counter+0xbf/0x104 [kvm]
Jul 6 14:48:27 nas kernel: [<ffffffffa0182d7e>] reprogram_gp_counter+0x129/0x146 [kvm]
Jul 6 14:48:27 nas kernel: [<ffffffffa0182ea6>] reprogram_counter+0x33/0x54 [kvm]
Jul 6 14:48:27 nas kernel: [<ffffffffa030c1fd>] intel_pmu_set_msr+0x16a/0x2ca [kvm_intel]
Jul 6 14:48:27 nas kernel: [<ffffffffa01830c6>] kvm_pmu_set_msr+0x15/0x17 [kvm]
Jul 6 14:48:27 nas kernel: [<ffffffffa0163eb2>] kvm_set_msr_common+0x9d2/0xa64 [kvm]
Jul 6 14:48:27 nas kernel: [<ffffffffa030b58b>] vmx_set_msr+0x3cc/0x3de [kvm_intel]
Jul 6 14:48:27 nas kernel: [<ffffffffa0160741>] kvm_set_msr+0x61/0x63 [kvm]
Jul 6 14:48:27 nas kernel: [<ffffffffa03041e9>] handle_wrmsr+0x3b/0x62 [kvm_intel]
Jul 6 14:48:27 nas kernel: [<ffffffffa030b055>] vmx_handle_exit+0xf9d/0x1035 [kvm_intel]
Jul 6 14:48:27 nas kernel: [<ffffffffa016a8a9>] kvm_arch_vcpu_ioctl_run+0x357/0x1165 [kvm]
Jul 6 14:48:27 nas kernel: [<ffffffffa0302f4c>] ? __vmx_load_host_state.part.27+0x120/0x127 [kvm_intel]
Jul 6 14:48:27 nas kernel: [<ffffffffa0165088>] ? kvm_arch_vcpu_load+0xea/0x1a0 [kvm]
Jul 6 14:48:27 nas kernel: [<ffffffffa015ad71>] kvm_vcpu_ioctl+0x178/0x499 [kvm]
Jul 6 14:48:27 nas kernel: [<ffffffff810b4c8f>] ? perf_ctx_unlock+0x1f/0x22
Jul 6 14:48:27 nas kernel: [<ffffffff810bd85b>] ? __perf_event_task_sched_in+0xd0/0x135
Jul 6 14:48:27 nas kernel: [<ffffffff81130112>] vfs_ioctl+0x13/0x2f
Jul 6 14:48:27 nas kernel: [<ffffffff81130642>] do_vfs_ioctl+0x49c/0x50a
Jul 6 14:48:27 nas kernel: [<ffffffff8113921f>] ? __fget+0x72/0x7e
Jul 6 14:48:27 nas kernel: [<ffffffff811306ee>] SyS_ioctl+0x3e/0x5c
Jul 6 14:48:27 nas kernel: [<ffffffff8167f537>] entry_SYSCALL_64_fastpath+0x1a/0xa9

nas-diagnostics-20170706-1453.zip
  13. Personally, I think this docker is more stable. I was having issues before with the old docker unexpectedly stopping, and this one has been rock-solid. Time will tell, of course. One thing: I don't think the Crashplan de-dupe worked when switching between the two dockers. I'm pretty sure Crashplan is re-uploading my entire backup set, even though I've left the old Missing paths in the backup set, as per jlesage's instructions. There are a lot of files that it seems to skip over, but there are also a lot that it Sends. But that's OK, it will finish eventually. J
  14. Thank you... updated successfully! J
  15. Hi, The container is running nicely - very stable over the last week or so. Question: Should it self-update to the latest clients, or do you need to push a container update for that to happen? I have a grey message 'CrashPlan Upgrade Failed' at the top of the Backup screen, with the Crashplan log saying 'Download of upgrade failed - version 1436674800483, connection lost.' Could very well be a networking/firewall issue on my side but thought I'd check. Thanks, James
  16. Yes... that was it! I didn't realise there are individual inbound backup locations separate to the global location. Adjusted that, all working again. Thanks!
  17. I'm having trouble with inbound backups, however. Crashplan says 'backup location is not accessible', although it can browse /backupArchives fine, and file/folder permissions there are 777. @Djoss, is inbound working for you?
  18. Went through the adoption process and the backup is now running. It's difficult to tell if the deduplication is actually working, although my firewall stats say 12GB uploaded over the last 12 hours and Crashplan says 115GB completed, so I guess it is. Very happy to see a new Crashplan docker.
  19. Yeah, it's running fine. OK, I'll ignore the FCP report. Thanks for your help!
  20. Hi all, I'm cleaning up my array, using Fix Common Problems' Extended Test to help me identify issues. One thing it's flagging is: I'd like to set this back to default ownership/permissions. I could just delete it and re-create it, but if someone can tell me what they have, I'll just fix it. Thanks in advance, James
  21. Oh, good idea. Next time there's an update I'll try and report back! Cheers, James
  22. I'm currently on 6.2, but I think I've had it for as long as I can remember being on 6.x J