Everything posted by Fiservedpi
-
Yes, the battery is just for the Unraid server. I've got other UPSes for the network, so the network stays up.
-
I've had a couple of power issues recently and I've not received any UPS alerts. I've got email and Pushbullet set up and tested to be working. Is there anything I might be missing in order to receive an alert when the server goes on battery?
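For reference, Unraid's built-in UPS support is apcupsd-based, and apcupsd fires hook scripts (e.g. `/etc/apcupsd/onbattery`) when the UPS state changes. Below is a minimal sketch of such a hook, assuming an apcupsd setup; `NOTIFY_CMD` is a placeholder for whatever notifier you actually use (Unraid's own notify script path is an assumption here and should be verified), not the stock configuration.

```shell
# Sketch of an /etc/apcupsd/onbattery hook; apcupsd runs this when the UPS
# switches to battery. NOTIFY_CMD is a hypothetical placeholder -- on Unraid
# you might point it at /usr/local/emhttp/webGui/scripts/notify (verify the
# path on your box before relying on it).
ups_on_battery() {
    local notify="${NOTIFY_CMD:-echo}"   # fall back to echo for a dry-run
    "$notify" "UPS ALERT: $(hostname) is running on battery power"
}
ups_on_battery
```

If alerts still don't arrive, checking that the hook script is executable and that apcupsd (not just the notification agents) is actually seeing the on-battery event would be the first things to rule out.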
-
Did you ever figure out what the "mixed HW and IP" warning was about? I'm getting the same thing.
-
Has this been resolved? I didn't have much luck with the Google machine. ```Tower kernel: igb 5 eth0: mixed HW and IP checksum settings```
-
I totally biffed it. The old router's IP range was 192.168.86.0/24, the new router's is 192.168.1.0/24. So guess what I did? I went through every device manually and set each one up again, rather than just editing the new router's IP range, so everything got new IPs. Needless to say, it's been about two weeks and I'm almost back at 100%. Lesson learned.
-
I'll be getting a new router tomorrow and I want to make the switch as painless as possible. For my Unraid server to keep the same IP address it had with the old router, do I have to configure a static IP in the new router and forward the same ports? Is there anything else I should be aware of?
-
[Support] Plex-Discord Role Management Docker
Fiservedpi replied to CyaOnDaNet's topic in Docker Containers
So I borked my whole Docker setup today and I'm just getting this bot back up and running, but I'm having trouble: The Sopranos is not "airing", but I'm personally still downloading my purchased content, so how can I get notified of new episodes added to my server? I tried the notifications include-show option, but The Sopranos doesn't show up anywhere in the bot. What should I do to make the bot discover it? P.S. The show is visible in both Sonarr and Tautulli. -
[6.8-RC1] ECC error with ryzen 3700x and ECC ram
Fiservedpi commented on trott's report in Prereleases
You can stop this by blacklisting amd64_edac_mod. Under /lib/modprobe.d/ and /etc/modprobe.d/, create a new .conf file (I named it amd64_edac_mod.conf), and within that file place this line: blacklist amd64_edac_mod Source -
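The steps above can be sketched as a couple of shell commands, assuming a standard Linux modprobe.d layout. On the real system you'd run this as root with `CONF_DIR=/etc/modprobe.d` (and note that on Unraid, changes to the RAM-backed rootfs need to be reapplied at boot, e.g. via the go file); a scratch directory is used here so the sketch is safe to dry-run.

```shell
# Create a modprobe blacklist entry for the EDAC module described in the post.
# CONF_DIR defaults to a scratch dir for a safe dry-run; use /etc/modprobe.d
# (as root) to actually apply it.
CONF_DIR="${CONF_DIR:-$(mktemp -d)}"
mkdir -p "$CONF_DIR"
echo "blacklist amd64_edac_mod" > "$CONF_DIR/amd64_edac_mod.conf"
cat "$CONF_DIR/amd64_edac_mod.conf"   # -> blacklist amd64_edac_mod
```

A `blacklist` line only prevents automatic loading by alias, so the module stays out unless something loads it explicitly.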
Thank you @Squid
-
If you go to the beta, can you simply go back to stable? I'm getting some crazy errors when trying to go back to stable, and my cache drive unmounted/unassigned itself. ```Linux 5.7.2-Unraid.
root@Tower:~# tail -f /var/log/syslog
Jun 27 15:15:19 Tower kernel: SQUASHFS error: Unable to read fragment cache entry [1602b7c]
Jun 27 15:15:19 Tower kernel: SQUASHFS error: Unable to read page, block 1602b7c, size 3a4c
Jun 27 15:15:19 Tower kernel: veth3f1bd44: renamed from eth0
Jun 27 15:15:19 Tower kernel: br-341c40ac3595: port 1(vethde87652) entered disabled state
[the SQUASHFS fragment-cache/read-page error pair above repeats continuously throughout; repeats trimmed]
Jun 27 15:15:23 Tower kernel: docker0: port 1(veth48c12f9) entered disabled state
Jun 27 15:15:23 Tower kernel: device veth48c12f9 left promiscuous mode
Jun 27 15:15:24 Tower kernel: br-341c40ac3595: port 1(vethde87652) entered disabled state
Jun 27 15:15:24 Tower kernel: device vethde87652 left promiscuous mode
Jun 27 15:15:24 Tower kernel: docker0: port 2(vethd184548) entered disabled state
Jun 27 15:15:24 Tower kernel: device vethd184548 left promiscuous mode
Jun 27 15:15:24 Tower kernel: vethd492f19: renamed from eth0
Jun 27 15:15:24 Tower kernel: device br0 left promiscuous mode
Jun 27 15:15:24 Tower kernel: veth94e50e2: renamed from eth0
Jun 27 15:15:27 Tower kernel: vethb5852b4: renamed from eth0
Jun 27 15:15:27 Tower kernel: docker0: port 6(vethcbd2e42) entered disabled state
Jun 27 15:15:27 Tower kernel: veth2262833: renamed from eth0
Jun 27 15:15:27 Tower kernel: docker0: port 3(vethb2d76f2) entered disabled state
Jun 27 15:15:29 Tower kernel: veth20b9485: renamed from eth0
Jun 27 15:15:29 Tower kernel: docker0: port 5(vethaabdc05) entered disabled state
Jun 27 15:15:37 Tower kernel: device vethcbd2e42 left promiscuous mode
Jun 27 15:15:37 Tower kernel: device vethb2d76f2 left promiscuous mode
[...]
Jun 27 15:18:14 Tower root: error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
Jun 27 15:18:14 Tower root: cat: /var/run/libvirt/libvirtd.pid: No such file or directory
Jun 27 15:18:14 Tower root: kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
Jun 27 15:18:17 Tower root: /etc/rc.d/rc.libvirt: line 151: 7185 Bus error /sbin/modprobe -ra $MODULE $MODULES 2> /dev/null
Jun 27 15:18:17 Tower root: virtlogd is not running...
Jun 27 15:18:18 Tower avahi-dnsconfd[16049]: read(): EOF```
-
[Support] Plex-Discord Role Management Docker
Fiservedpi replied to CyaOnDaNet's topic in Docker Containers
How do I remove/unsubscribe from the @watching role? I don't want to be notified when someone starts playback. EDIT: Never mind, I got it: !unlink {@user} -
[Support] Plex-Discord Role Management Docker
Fiservedpi replied to CyaOnDaNet's topic in Docker Containers
Oh ok, I thought I had double-enrolled myself or something -
[Support] Plex-Discord Role Management Docker
Fiservedpi replied to CyaOnDaNet's topic in Docker Containers
Yep, I've implemented it. Thanks for the swift responses! ✊ -
[Support] Plex-Discord Role Management Docker
Fiservedpi replied to CyaOnDaNet's topic in Docker Containers
Anyone else's container randomly stop? I always have to keep an eye on it. Is this a bug? -
Just make sure your appdata folder is backed up safely off-server somewhere, just in case. You don't even need to worry about drive assignments since you're not changing your mobo. But yeah, super simple.
-
Every time I pull down a Docker image, my docker.log has this error and the container doesn't update. Can anyone assist in resolving this? ```2020-04-16 11:14:27.268132 I | http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
2020-04-16 23:00:55.731317 W | wal: sync duration of 6.014929115s, expected less than 1s
2020-04-17 11:00:50.285414 W | wal: sync duration of 1.100772692s, expected less than 1s
2020-04-18 11:00:52.343940 W | wal: sync duration of 3.136297497s, expected less than 1s``` docker.log
-
[Plugin] Linuxserver.io - Unraid Nvidia
Fiservedpi replied to linuxserver.io's topic in Plugin Support
Same here, even with the GPU Stats plugin removed; the logs get slammed. CMD: `> /var/log/syslog` to truncate it for now, since the log was 422,000 bytes. There's something going on here at the kernel level, I think. nvidia-container-runtime-hook.log -
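As a side note on the truncation command quoted above: zeroing the file in place (rather than deleting and recreating it) keeps the file handle that syslogd holds open valid, so logging continues uninterrupted. A minimal sketch, using a scratch file so it's safe to dry-run; on the real box you'd set `LOG=/var/log/syslog`:

```shell
# Truncate a runaway log in place. ": >" opens the file for writing with no
# output, shrinking it to zero bytes without replacing the inode, so any
# daemon with the file already open keeps writing to the same file.
LOG="${LOG:-$(mktemp)}"
echo "pretend this is 422,000 bytes of kernel spam" >> "$LOG"
: > "$LOG"       # truncate to zero bytes
wc -c < "$LOG"   # -> 0
```

`truncate -s 0 "$LOG"` is an equivalent alternative where coreutils is available.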
My goodness, that was painless: literally 5 minutes once the new hardware was ready.
-
docker very slow and makes webui unresponsive
Fiservedpi replied to wubboz's topic in General Support
Ditto -
Ok, great, thanks. So: stop all VMs, unassign my GPU from the VM just to be safe, and the rest should just work?
-
Thanks @Squid. So pretty much just move the discs over and Unraid will handle it? What about cache? Will Unraid handle that too?
-
Doing this next week. Just to confirm: should I make sure the drives are in the same SATA ports on the new mobo, like sdb, sdc, sdd, etc.? And make sure parity, which is sde, goes on sde again?
-
When I pull down/update a container it takes at least 5-8 minutes to complete; 90% of the time it works, 10% of the time it stalls out and doesn't finish. 1. Is there any way I can see what's happening (/var/log/docker.log)? 2. This problem is recent; it never used to be like this until maybe 2-3 weeks ago. 3. Could this be related to the worldwide bandwidth cap? Here is /var/log/docker.log: ```2020-04-04 22:35:19.916441 W | wal: sync duration of 1.230827326s, expected less than 1s
time="2020-04-04T22:35:19.916520110-04:00" level=error msg="error creating cluster object" error="name conflicts with an existing object" module=node node.id=pkh71xjdu81eie6xiprf7xjnc
2020-04-04 22:35:22.038938 W | wal: sync duration of 1.356385242s, expected less than 1s
time="2020-04-04T22:35:22.551929134-04:00" level=error msg="agent: session failed" backoff=100ms error="session initiation timed out" module=node/agent node.id=pkh71xjdu81eie6xiprf7xjnc
time="2020-04-04T22:35:23.897859341-04:00" level=error msg="error reading the kernel parameter net.ipv4.vs.expire_nodest_conn" error="open /proc/sys/net/ipv4/vs/expire_nodest_conn: no such file or directory"
2020-04-05 22:35:23.408243 W | wal: sync duration of 3.460660432s, expected less than 1s
time="2020-04-06T00:04:34.848541359-04:00" level=error msg="stream copy error: reading from a closed fifo"
time="2020-04-06T10:14:24.247694905-04:00" level=error msg="Not continuing with pull after error: context canceled"
[the "stream copy error" and "Not continuing with pull after error: context canceled" lines repeat many times; repeats trimmed]
2020-04-06 22:35:22.012352 W | wal: sync duration of 2.061016111s, expected less than 1s
time="2020-04-08T02:33:19.360768836-04:00" level=error msg="Handler for POST /v1.37/containers/48b66735f418/restart returned error: Cannot restart container 48b66735f418: container is marked for removal and cannot be started"
time="2020-04-08T11:43:26.171070927-04:00" level=error msg="4996f60ab5ffdb18983a61f61369b4e3d8287348ed513053407dcd43addb5914 cleanup: failed to delete container from containerd: no such container"
time="2020-04-08T11:43:26.171101471-04:00" level=error msg="Handler for POST /v1.40/containers/4996f60ab5ffdb18983a61f61369b4e3d8287348ed513053407dcd43addb5914/start returned error: driver failed programming external connectivity on endpoint Virt-Manager (d7e04025416c94683db0208675e3e22efe9e1c6fdd04e8ec38b9cb11b9d528ac): Bind for 0.0.0.0:8080 failed: port is already allocated"
[similar "Bind for 0.0.0.0:8080 failed: port is already allocated" failures repeat for later start attempts; repeats trimmed]
2020-04-08 22:35:24.519657 W | wal: sync duration of 4.555785529s, expected less than 1s``` tower-diagnostics-20200408-1955.zip
-
Any resolution to this? It seems like my spin-down groups haven't worked since 6.8.3.