VladoPortos

Members
  • Posts: 30
  • Joined
  • Last visited


  1. The latest update broke something; I can't play any book, and I get this error in the log:
     WARN: [LibraryItemController] User attempted to update without permission User
     EDIT: Issue solved, thanks @advplyr, super fast update! For reference: https://github.com/advplyr/audiobookshelf/issues/473
  2. I love you, thanks so much... the custom location fixed it.
  3. Did something change in the latest update related to the proxy? My reverse proxy suddenly stopped working. When I go to the URL of Firefly on my server, like https://firefly.server.com, it automatically tries https://firefly.server.com:8080/login. Why does it force the 8080 in there?
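A redirect that pins :8080 into the URL usually means the app is building absolute links from the request as it sees it (including the container's internal port) rather than from the proxied hostname. A minimal nginx sketch of the usual remedy, assuming the container listens on 8080 locally; the hostnames, port, and TLS details are illustrative, not the poster's actual config:

```nginx
server {
    listen 443 ssl;
    server_name firefly.server.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        # Forward the original host and scheme so the app
        # generates URLs for firefly.server.com, not :8080.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Many Laravel-based apps (Firefly III included) additionally read a configured base URL and a trusted-proxy setting from their environment, so checking those container variables is worth doing alongside the proxy headers.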
  4. Hi, that is a random name Docker generates for the container; it assigns funny names when none is specified. If it is stopped (and you are not missing any previously running service) I would delete it. It looks like there was a template error or something.
  5. For the life of me, I can't get a VMware VM to boot the EFI image. I can boot if I change the firmware type to BIOS on the VM (I also need to change netboot.xyz.efi to netboot.xyz.kpxe in Unifi), but the damn EFI won't work :-/ All I get is "Downloading NBP file..." and then it jumps back to the Boot Manager of the VM...
     EDIT 1: Looks like the issue might be VMware-only... I managed to EFI-boot my NUC machine just fine... but I found a bug? The Docker template says "Webserver for local asset hosting (Default 8080)", while:
     root@3cd6e4a03848:/config/nginx# cat site-confs/default
     server {
         listen 80;
         location / {
             root /assets;
             autoindex on;
         }
     }
     EDIT 2: Also, there seems to be something wrong with the squashfs images. I have tried two live CDs and neither of them loaded the downloaded squashfs image (each started downloading a new one from the net), but initrd and vmlinuz were no problem from the local network...
     EDIT 3: Well, I'm at the end of my wits. I made Ubuntu-21.10-LXDE local via the web interface. It downloaded OK (I think), all 3 files present... again, booting initrd and vmlinuz loads OK... the squashfs does not... this is in the console: (attached pic)
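One way to reconcile the template/port mismatch described in EDIT 1 — a sketch, not an official fix — is to make the container's nginx actually listen on the port the template advertises, by editing the site config quoted above:

```nginx
# /config/nginx/site-confs/default (hypothetical edit)
server {
    listen 8080;          # was: listen 80; match the template's advertised port
    location / {
        root /assets;
        autoindex on;
    }
}
```

The alternative is to leave nginx on 80 and change the host-to-container port mapping in the Docker template to target container port 80 instead; either way, the advertised port and the listening port must agree for local asset (squashfs) hosting to work.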
  6. OK, I will answer my own question. The image absolutely has to use Network Type: Host (I had it on its own VLAN). Otherwise, for some unknown reason, even though we are mounting /proc into the container, the file /proc/net/dev is different inside the container (it does not make sense...), and you can't even mount it directly; the container will not start... so Host networking it is... :-/ https://github.com/influxdata/telegraf/issues/4505
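The conclusion above can be sketched as a compose fragment. This is illustrative, not the poster's exact setup: the image tag and paths are assumptions, and the `HOST_PROC` variable follows Telegraf's documented convention for pointing its collectors at a host-mounted /proc:

```yaml
services:
  telegraf:
    image: telegraf:latest
    network_mode: host              # host networking, so /proc/net/dev matches the host
    volumes:
      - /proc:/hostfs/proc:ro       # mount host /proc read-only at an alternate path
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
    environment:
      - HOST_PROC=/hostfs/proc      # tell Telegraf where the host's /proc lives
```

With a bridged or VLAN network, the container gets its own network namespace, so /proc/net/dev only lists the container's interfaces regardless of mounts — which is why host networking is the workable option here.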
  7. Has anybody had Telegraf not detecting all network interfaces? I have a network card in a bond plus some VLANs... in Telegraf I set up inputs.net without parameters to take them all (I also tried specifying the interfaces), but in Grafana I can only see some of them. See the attached pic. My ip a looks like this:
     1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
         inet 127.0.0.1/8 scope host lo
            valid_lft forever preferred_lft forever
         inet6 ::1/128 scope host
            valid_lft forever preferred_lft forever
     2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
         link/ipip 0.0.0.0 brd 0.0.0.0
     3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
         link/gre 0.0.0.0 brd 0.0.0.0
     4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1476 qdisc noop state DOWN group default qlen 1000
         link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
     5: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1464 qdisc noop state DOWN group default qlen 1000
         link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
     6: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
         link/ipip 0.0.0.0 brd 0.0.0.0
     7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
         link/sit 0.0.0.0 brd 0.0.0.0
     10: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq master bond0 state UP group default qlen 1000
         link/ether 68:05:ca:3c:9d:52 brd ff:ff:ff:ff:ff:ff
     11: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq master bond0 state UP group default qlen 1000
         link/ether 68:05:ca:3c:9d:52 brd ff:ff:ff:ff:ff:ff permaddr 68:05:ca:3c:9d:53
     12: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
         link/ether 68:05:ca:3c:9d:52 brd ff:ff:ff:ff:ff:ff
     13: bond0.1020@bond0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0.1020 state UP group default qlen 1000
         link/ether 68:05:ca:3c:9d:52 brd ff:ff:ff:ff:ff:ff
     14: bond0.1030@bond0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0.1030 state UP group default qlen 1000
         link/ether 68:05:ca:3c:9d:52 brd ff:ff:ff:ff:ff:ff
     15: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
         link/ether 68:05:ca:3c:9d:52 brd ff:ff:ff:ff:ff:ff
         inet 10.0.0.2/24 scope global br0
            valid_lft forever preferred_lft forever
     16: br0.1020: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
         link/ether 68:05:ca:3c:9d:52 brd ff:ff:ff:ff:ff:ff
     17: br0.1030: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
         link/ether 68:05:ca:3c:9d:52 brd ff:ff:ff:ff:ff:ff
     18: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
         link/none
         inet 10.253.0.1/32 scope global wg0
            valid_lft forever preferred_lft forever
     19: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
         link/ether 02:42:15:79:85:f5 brd ff:ff:ff:ff:ff:ff
         inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
            valid_lft forever preferred_lft forever
     21: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
         link/ether 52:54:00:aa:0f:96 brd ff:ff:ff:ff:ff:ff
         inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
            valid_lft forever preferred_lft forever
     22: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq master virbr0 state DOWN group default qlen 1000
         link/ether 52:54:00:aa:0f:96 brd ff:ff:ff:ff:ff:ff
     I would like at least bond0 to be monitored (so far I have it via a plugin in Unraid that generates graphs OK, but not in Grafana...)
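For reference, the Telegraf side of this setup would look something like the fragment below. The `[[inputs.net]]` section and its `interfaces` option are part of Telegraf's standard plugin set; the interface names are taken from the `ip a` output above. Note that, per the self-answer in the previous post, these names only exist inside the container when it runs with host networking:

```toml
# telegraf.conf (sketch)
[[inputs.net]]
  # Explicit list; omit this line entirely to collect all interfaces.
  interfaces = ["bond0", "br0", "eth0", "eth1"]
```

On a VLAN/bridge network the container's own namespace hides bond0 and friends, so an explicit list like this silently matches nothing — which is consistent with only some interfaces showing up in Grafana.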
  8. I'm not sure what happened; it was working before, and I had such a good experience with CloudBerry that I bought a personal license for it... but suddenly the mounted disks vanished from the backup-source UI. I kind of solved it by mounting my unassigned backup drive into the Docker container under /drive, which was picked up OK... then I added another path, /mnt/user, mapped in the container under /drive/user, and this also worked... so it looks a bit stupid inside the container, but at least it works... I might have a second look at Duplicacy...
  9. Hello all, I have enabled bonding on my Unraid server, mode 4 (802.3ad), since this is also supported on my Unifi switch. I'm using an Intel PRO/1000 PT Dual Port Server Adapter (EXPI9402P). I'm fairly certain both ports are running at 1 Gbit/s, mostly basing it on this:
     10: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq master bond0 state UP group default qlen 1000
         link/ether 68:05:ca:3c:9d:52 brd ff:ff:ff:ff:ff:ff
     11: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq master bond0 state UP group default qlen 1000
         link/ether 68:05:ca:3c:9d:52 brd ff:ff:ff:ff:ff:ff permaddr 68:05:ca:3c:9d:53
     and https://imgur.com/a/eNMQpB9 So everything shows 1 Gbit... except here in the dashboard: https://imgur.com/a/X9YGM7l Not sure if it's a bug or intentional... plexserver-diagnostics-20211010-1933.zip
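Another way to double-check negotiated slave speeds is /proc/net/bonding/bond0, which the kernel bonding driver exposes. Below is a hypothetical little helper (not from the post) that pulls each slave's reported speed out of that file; the sample text is abridged from the driver's usual layout:

```python
def slave_speeds(bonding_status: str) -> dict:
    """Map each 'Slave Interface' in bonding status text to its 'Speed' line."""
    speeds = {}
    current = None
    for line in bonding_status.splitlines():
        line = line.strip()
        if line.startswith("Slave Interface:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("Speed:") and current:
            speeds[current] = line.split(":", 1)[1].strip()
    return speeds

# Abridged sample of /proc/net/bonding/bond0 for an 802.3ad bond
sample = """\
Bonding Mode: IEEE 802.3ad Dynamic link aggregation

Slave Interface: eth0
Speed: 1000 Mbps

Slave Interface: eth1
Speed: 1000 Mbps
"""

print(slave_speeds(sample))
```

On a live box you would feed it `open("/proc/net/bonding/bond0").read()` instead of the sample; if both slaves report 1000 Mbps there, a dashboard showing otherwise is a display issue rather than a link issue.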
  10. Probably my final update: it seems it was the macvlan driver issue for me, not Nvidia. I'm 14 hours in, two cards encoding videos, and apart from these messages I haven't had a crash or anything else strange in the log:
      Oct 10 08:54:32 PlexServer kernel: caller _nv000723rm+0x1ad/0x200 [nvidia] mapping multiple BARs
      Oct 10 09:01:02 PlexServer kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
      Oct 10 09:01:02 PlexServer kernel: caller _nv000723rm+0x1ad/0x200 [nvidia] mapping multiple BARs
      Oct 10 09:04:52 PlexServer kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
      Oct 10 09:04:52 PlexServer kernel: caller _nv000723rm+0x1ad/0x200 [nvidia] mapping multiple BARs
      Oct 10 09:14:10 PlexServer kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
      Oct 10 09:14:10 PlexServer kernel: caller _nv000723rm+0x1ad/0x200 [nvidia] mapping multiple BARs
      And as far as I know, there is nothing to be done about these... as they seem to be related to the BIOS...(?)
  11. So I did not get a crash as before, maybe because I caught it first, but it seems to be network related; at least this was in the log:
      Oct 9 17:25:13 PlexServer kernel: ------------[ cut here ]------------
      Oct 9 17:25:13 PlexServer kernel: WARNING: CPU: 4 PID: 78054 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
      Oct 9 17:25:13 PlexServer kernel: Modules linked in: xt_mark nvidia_uvm(PO) xt_CHECKSUM ipt_REJECT nf_reject_ipv4 xt_nat xt_tcpudp ip6table_mangle ip6table_nat iptable_mangle nf_tables vhost_net tun vhost vhost_iotlb tap macvlan xt_conntrack nf_conntrack_netlink nfnetlink xt_addrtype br_netfilter xfs nfsd lockd grace sunrpc md_mod i915 video iosf_mbi i2c_algo_bit intel_gtt nvidia_drm(PO) nvidia_modeset(PO) drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops nvidia(PO) drm backlight agpgart it87 hwmon_vid iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libblake2s blake2s_x86_64 libblake2s_generic libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables wmi_bmof mxm_wmi edac_mce_amd amd_energy kvm_amd btusb btrtl btbcm kvm btintel bluetooth crct10dif_pclmul crc32_pclmul crc32c_intel mpt3sas ghash_clmulni_intel aesni_intel crypto_simd
      Oct 9 17:25:13 PlexServer kernel: ecdh_generic ecc cryptd nvme i2c_piix4 glue_helper atlantic ahci nvme_core libahci raid_class scsi_transport_sas i2c_core rapl ccp wmi k10temp thermal button acpi_cpufreq
      Oct 9 17:25:13 PlexServer kernel: CPU: 4 PID: 78054 Comm: kworker/4:0 Tainted: P O 5.10.28-Unraid #1
      Oct 9 17:25:13 PlexServer kernel: Hardware name: Gigabyte Technology Co., Ltd. TRX40 AORUS MASTER/TRX40 AORUS MASTER, BIOS F5q 04/12/2021
      Oct 9 17:25:13 PlexServer kernel: Workqueue: events macvlan_process_broadcast [macvlan]
      Oct 9 17:25:13 PlexServer kernel: RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
      Oct 9 17:25:13 PlexServer kernel: Code: e8 dc f8 ff ff 44 89 fa 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 36 f6 ff ff 84 c0 75 bb 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 6d f3 ff ff e8 35 f5 ff ff e9 22 01
      Oct 9 17:25:13 PlexServer kernel: RSP: 0018:ffffc9000037cdd8 EFLAGS: 00010202
      Oct 9 17:25:13 PlexServer kernel: RAX: 0000000000000188 RBX: 000000000000d3ca RCX: 00000000fe806454
      Oct 9 17:25:13 PlexServer kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffa00dc128
      Oct 9 17:25:13 PlexServer kernel: RBP: ffff888de1169900 R08: 00000000c01ab324 R09: ffff8883ef80aba0
      Oct 9 17:25:13 PlexServer kernel: R10: 0000000000000098 R11: ffff8882e0cea100 R12: 0000000000001cca
      Oct 9 17:25:13 PlexServer kernel: R13: ffffffff8210b440 R14: 000000000000d3ca R15: 0000000000000000
      Oct 9 17:25:13 PlexServer kernel: FS: 0000000000000000(0000) GS:ffff88903d100000(0000) knlGS:0000000000000000
      Oct 9 17:25:13 PlexServer kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      Oct 9 17:25:13 PlexServer kernel: CR2: 0000000000527fc8 CR3: 00000001645d2000 CR4: 0000000000350ee0
      Oct 9 17:25:13 PlexServer kernel: Call Trace:
      Oct 9 17:25:13 PlexServer kernel: <IRQ>
      Oct 9 17:25:13 PlexServer kernel: nf_conntrack_confirm+0x2f/0x36 [nf_conntrack]
      Oct 9 17:25:13 PlexServer kernel: nf_hook_slow+0x39/0x8e
      Oct 9 17:25:13 PlexServer kernel: nf_hook.constprop.0+0xb1/0xd8
      Oct 9 17:25:13 PlexServer kernel: ? ip_protocol_deliver_rcu+0xfe/0xfe
      Oct 9 17:25:13 PlexServer kernel: ip_local_deliver+0x49/0x75
      Oct 9 17:25:13 PlexServer kernel: __netif_receive_skb_one_core+0x74/0x95
      Oct 9 17:25:13 PlexServer kernel: process_backlog+0xa3/0x13b
      Oct 9 17:25:13 PlexServer kernel: net_rx_action+0xf4/0x29d
      Oct 9 17:25:13 PlexServer kernel: __do_softirq+0xc4/0x1c2
      Oct 9 17:25:13 PlexServer kernel: asm_call_irq_on_stack+0x12/0x20
      Oct 9 17:25:13 PlexServer kernel: </IRQ>
      Oct 9 17:25:13 PlexServer kernel: do_softirq_own_stack+0x2c/0x39
      Oct 9 17:25:13 PlexServer kernel: do_softirq+0x3a/0x44
      Oct 9 17:25:13 PlexServer kernel: netif_rx_ni+0x1c/0x22
      Oct 9 17:25:13 PlexServer kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
      Oct 9 17:25:13 PlexServer kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
      Oct 9 17:25:13 PlexServer kernel: process_one_work+0x13c/0x1d5
      Oct 9 17:25:13 PlexServer kernel: worker_thread+0x18b/0x22f
      Oct 9 17:25:13 PlexServer kernel: ? process_scheduled_works+0x27/0x27
      Oct 9 17:25:13 PlexServer kernel: kthread+0xe5/0xea
      Oct 9 17:25:13 PlexServer kernel: ? __kthread_bind_mask+0x57/0x57
      Oct 9 17:25:13 PlexServer kernel: ret_from_fork+0x22/0x30
      Oct 9 17:25:13 PlexServer kernel: ---[ end trace 33448d9f3f916301 ]---
      So I disabled both onboard NICs in the BIOS, added an Intel PRO/1000 dual-port PCI card that had been working for me in another (non-Unraid) server for 2 years without a hiccup... and another round of testing is a go...
  12. Unfortunately I can't 100% confirm it's the encoder; it's my best guess. There was also an issue with the network (the switch was kind of broken), and after removing it there has been no crash yet. But I was running only one card. For the sake of testing I just set both cards to encode and will report whether it crashes or not. So far the temps are: GTX 960 ~60 degrees, GTX 980 ~70 degrees.
  13. Hello all, I'm experiencing hard crashes after one or both Nvidia cards have been encoding for some time (tdarr). Has anybody experienced crashes with a similar message to mine here?