Everything posted by TexasDave

  1. I am trying to get rid of the "ComicRN: Requests module not found on system. I'll revert so this will work, but you probably should install" message by running the "pip install requests" that has been suggested, but I get the output below. What is the best way to upgrade Python (if needed), and if requests is already installed, any ideas why I still get the error message? Thanks!

root@Zack-unRAID:~# docker exec -it mylar bash
root@2269a8929bc0:/# pip install requests
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
Requirement already satisfied: requests in /usr/lib/python2.7/site-packages (2.22.0)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/lib/python2.7/site-packages (from requests) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/lib/python2.7/site-packages (from requests) (2.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/lib/python2.7/site-packages (from requests) (1.25.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python2.7/site-packages (from requests) (2019.3.9)
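For context, this is roughly how I have been checking whether requests is actually visible to the interpreter Mylar runs under (a sketch only - the python2 binary name inside this image is an assumption on my part):

docker exec -it mylar sh -c 'python2 --version'                                              # confirm which interpreter is in play
docker exec -it mylar sh -c 'python2 -c "import requests; print(requests.__version__)"'      # does requests import for that interpreter?
docker exec -it mylar sh -c 'python2 -m pip install --upgrade requests'                      # if not, install against that same interpreter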
  2. @saarg - thanks for your comment above. I had a really dumb mistake in my setup and would not have noticed it without your help. Thanks! I will correct my post above, and I attach a version of my "ubooquity.subdomain.conf" file, as it may be useful for others. I do not think one comes with the LetsEncrypt docker from the fine folks at LinuxServer? @CHBMB - hope you are well! Is there a conf file for Ubooquity with the Let's Encrypt docker? Last I looked there was not. It was easy enough to create my own (attached), but it would be great to get someone who knows what they are doing to add this to the conf library? ubooquity.subdomain.conf
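For anyone who just wants the shape of it without grabbing the attachment, mine follows the same pattern as the other subdomain proxy confs that ship with the LetsEncrypt docker - roughly the sketch below (the ubooquity container name, port 2202 and the include paths are assumptions based on how those other confs are laid out, so check them against your own setup):

server {
    listen 443 ssl;
    server_name ubooquity.*;
    include /config/nginx/ssl.conf;
    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        # "ubooquity" is assumed to be the container name on the same custom docker network
        set $upstream_ubooquity ubooquity;
        proxy_pass http://$upstream_ubooquity:2202;
    }
}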
  3. I just had to swap a drive on unRAID and am seeing a weird issue with Plex. I am running the latest unRAID and Plex docker. I typically use a Shield TV client, but I see the issue in web browsers and Plex Media Player too. A small number of TV shows will no longer play: when I start them, they just spin and nothing plays. The same shows play fine in Kodi. Any ideas on what could be wrong, or how I can debug to narrow down the issue? Thanks!
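If it helps, my plan for digging in was to tail the Plex server log while trying to start one of the affected shows - something like this (the container name and the log path inside the linuxserver image are assumptions on my part, adjust to your setup):

docker exec plex tail -f "/config/Library/Application Support/Plex Media Server/Logs/Plex Media Server.log"   # watch for errors while the show spins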
  4. First, big thanks for this docker, I find it super useful (as do my kids). 😀 In case anyone is looking for a solid Android reader for Ubooquity, KUBOO is nice. I am using it with LetsEncrypt/DuckDNS so I can use it outside the house (and so my son up at university can use it). Just make sure you open port 2202 on your router, that got me at first. WRONG!! You do not need to open that port. It is also easy to test your feed by pointing your browser at the address; if all is working, you should see the XML feed:
This works in your browser: https://XXXX.duckdns.org/
This is what you put in KUBOO: https://XXXX.duckdns.org/opds-comics/ (you can also put this in your browser and you will see the XML)
Replace XXXX with your DuckDNS subdomain. Thanks!!
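You can also sanity-check the feed from a terminal, e.g. (replace XXXX as above):

curl -s https://XXXX.duckdns.org/opds-comics/ | head   # should print the start of the OPDS XML feed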
  5. @johnnie.black - Many thanks (again). "Even a blind pig occasionally finds an acorn...." My current setup matches what you have suggested above so I have less work to do than I feared. Not sure if I did this on purpose when I built the system or if I just got lucky. On number (2) - that is cool and good to know. Thanks!!
  6. I have a disk that is dying and am in the process of replacing it (new drive installed, preclearing, etc.). All is good. 😀 But I am going to do a "clean and tidy" while I am in there. For the most part, I did a decent job of cable management when I built my system, but I am not the most dexterous and struggle working in tight spaces...😋 Two questions:
(1) I am trying to optimize (if possible) which ports my drives are plugged into. I am on a SuperMicro X10SL7-F (I have attached the schematic). The motherboard has the following ports:
o 2x (Intel PCH) Serial ATA (SATA 3.0) ports (6Gb/sec)
o 4x (Intel PCH) Serial ATA (SATA 2.0) ports (3Gb/sec)
o 8x SAS connectors (supported by the LSI 2308 SAS controller)
I have two cache drives (Samsung 850 EVO 500GB), one parity drive (WD Red 6TB) and multiple data drives (a combo of WD 3TB and 6TB). Are there "preferred" or "optimal" ports I should be using for the cache, parity and data drives on this board with unRAID?
(2) Assuming I decide to move the drives and plug them into different ports, is this OK? I assume unRAID will just see that a valid drive has moved to a new port and all will be fine? I hope this makes sense. For example, what happens if I unplug a drive from a 6Gb port and move it to a 3Gb port? Thanks for any guidance and suggestions... Quick Reference X10SL7-F.pdf
  7. @johnnie.black Many thanks for this. Is there a good set of docs or a web page where I can learn more about the various errors and which ones I should worry about? I would like to learn this and get a bit more self-sufficient. Also - I do regular parity checks (once a week), but in looking around on this error, it seems most folks do once a month? Is once a month enough? Thanks!!
  8. @wgstarks Got it. Like I said - senior moment. My cover photo has the same color and has text, so it was hard for my near-sighted eyes to see it there. 😛 Even when I looked again I missed it. Many thanks for your help!
  9. That is what I was thinking... Can you comment on the other drives? My initial install had three of these put in from the start, so if this one is getting close, the others must be too? Thanks!
  10. Happy 2019 to all the "unRAIDERs" out there... I was alerted to some disk errors (thanks to "Fix Common Problems") and wanted to see how bad this is. One of my disks has errors - see below. I do not know enough about hardware to tell whether this is just normal wear and tear. And more importantly, should I start to worry? The extended SMART test says all is OK, and I do regular parity checks and those are fine too. I also attach my diagnostics. Should I start thinking about replacing this drive and others? Thanks!
---
ATA Error Count: 7 (device log contains only the most recent five errors)
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 7 occurred at disk power-on lifetime: 36305 hours (1512 days + 17 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 53 00 58 9a 0e 01  Error: UNC at LBA = 0x010e9a58 = 17734232
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
-- -- -- -- -- -- -- --  ----------------  --------------------
25 00 80 78 96 0e e1 00      38d+22:51:24.116  READ DMA EXT
25 00 40 38 91 0e e1 00      38d+22:51:24.108  READ DMA EXT
35 00 40 b8 8a 0e e1 00      38d+22:51:24.104  WRITE DMA EXT
25 00 40 f8 8f 0e e1 00      38d+22:51:24.080  READ DMA EXT
47 00 01 30 06 00 a0 00      38d+22:51:24.079  READ LOG DMA EXT

Error 6 occurred at disk power-on lifetime: 36305 hours (1512 days + 17 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 53 00 98 8d 0e 01  Error: UNC at LBA = 0x010e8d98 = 17730968
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
-- -- -- -- -- -- -- --  ----------------  --------------------
25 00 40 b8 8a 0e e1 00      38d+22:51:20.068  READ DMA EXT
35 00 80 f8 7f 0e e1 00      38d+22:51:20.065  WRITE DMA EXT
25 00 00 b8 86 0e e1 00      38d+22:51:20.063  READ DMA EXT
25 00 40 78 81 0e e1 00      38d+22:51:20.042  READ DMA EXT
47 00 01 30 06 00 a0 00      38d+22:51:20.040  READ LOG DMA EXT

Error 5 occurred at disk power-on lifetime: 36305 hours (1512 days + 17 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 53 00 d0 80 0e 01  Error: UNC at LBA = 0x010e80d0 = 17727696
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
-- -- -- -- -- -- -- --  ----------------  --------------------
25 00 80 f8 7f 0e e1 00      38d+22:51:15.991  READ DMA EXT
25 00 40 b8 7a 0e e1 00      38d+22:51:15.984  READ DMA EXT
35 00 40 00 74 0e e1 00      38d+22:51:15.979  WRITE DMA EXT
25 00 78 40 79 0e e1 00      38d+22:51:15.961  READ DMA EXT
47 00 01 30 06 00 a0 00      38d+22:51:15.960  READ LOG DMA EXT

Error 4 occurred at disk power-on lifetime: 36305 hours (1512 days + 17 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 53 00 10 74 0e 01  Error: UNC at LBA = 0x010e7410 = 17724432
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
-- -- -- -- -- -- -- --  ----------------  --------------------
25 00 40 00 74 0e e1 00      38d+22:51:12.235  READ DMA EXT
c8 00 08 f8 73 0e e1 00      38d+22:51:12.233  READ DMA
25 00 40 78 69 0e e1 00      38d+22:51:12.226  READ DMA EXT
25 00 78 00 68 0e e1 00      38d+22:51:11.826  READ DMA EXT
25 00 40 80 51 0e e1 00      38d+22:51:11.229  READ DMA EXT

Error 3 occurred at disk power-on lifetime: 36305 hours (1512 days + 17 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 53 00 e8 4c 0e 01  Error: UNC at LBA = 0x010e4ce8 = 17714408
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
-- -- -- -- -- -- -- --  ----------------  --------------------
25 00 08 50 49 0e e1 00      38d+22:51:06.731  READ DMA EXT
35 00 d0 80 45 0e e1 00      38d+22:51:06.728  WRITE DMA EXT
35 00 28 50 3d 0e e1 00      38d+22:51:06.726  WRITE DMA EXT
47 00 01 30 06 00 a0 00      38d+22:51:06.725  READ LOG DMA EXT
47 00 01 30 00 00 a0 00      38d+22:51:06.724  READ LOG DMA EXT
zack-unraid-diagnostics-20190109-1831.zip
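If it is easier than the GUI, the same details can be pulled from the console with smartctl - something like the below (the /dev/sdX name is a placeholder, substitute the actual device):

smartctl -a /dev/sdX                                                                         # full SMART report, including the error log above
smartctl -A /dev/sdX | grep -E "Reallocated_Sector|Current_Pending|Offline_Uncorrectable"    # the counters most worth watching
smartctl -t long /dev/sdX                                                                    # kick off another extended self-test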
  11. Hmmmm.... I cannot seem to find it on my laptop. I can change my avatar (on both laptop and mobile) but do not see how to change the "cover banner"? I did it once so I know it is there. Sorry - I am sure it is obvious.
  12. At some point, I added a "cover banner" to my forum profile. I am having a "senior moment" and cannot seem to find out how to change it to something new. Can someone advise? Thanks and Happy 2019!
  13. @Cessquill, @BRiT, @DZMM, thanks for your help! I will give it a shot and your advice is very helpful...
  14. I am currently considering setting up "Plex Live TV". I am running the LinuxServer Plex docker (and as many LS dockers as possible). I know I need to get a tuner and install unRAID DVB. Anything else? Is there a known tuner that "plays nice" with unRAID, the Plex docker and unRAID DVB? The supported tuners are here: https://support.plex.tv/articles/225877427-supported-dvr-tuners-and-antennas/ I am in the UK so will want Freeview (I think). If there is a guide to setting up "Plex Live TV" for unRAID, I would be grateful for a pointer. Many thanks!
  15. @Djoss - all good. The "random string" backup set was actually all my photos. Just renamed it "Photos". All good on my side now - many thanks!!
  16. @Djoss - no, I did not. Will do that. It was the first set in the above screenshot. That is an "additional" set that I did not set up...
  17. Thanks for making this docker. I had previously been on the "other" docker and had migrated to Pro. All has been fine, but I am finally moving to this docker as it is supported. I have installed the docker and followed the instructions to "Take over existing device". I think all has gone well, but I am a bit lost as to what to do next? I do not care about versions. Do I: Run backups? Or delete the "ddfe02...." entry and then run backups? See screenshot. Sorry to be dense, but I think all is good and I just want to do the right thing without having to upload my files again. Thanks!
  18. That would work 🙂 We have some iPads and iPhones, but mostly PC and Android. How will that affect the iPhone and iPad? No one on those devices uses unRAID, so I assume it is not an issue. What is the purpose of that daemon? EDIT: Stopped being lazy https://linux.die.net/man/8/avahi-daemon Still curious why this happens.... Thanks!
  19. I have been looking at this issue off and on for several months now. There are two fixes/issues that can cause this. See these threads:
CAN NOT WRITE TO ARRAY - INTERMITTENTLY (SOLVED)
LOG FILLING WITH "HOST NAME CONFLICT" MESSAGES
I had the first issue and corrected it (shrank the IP range owned by the DHCP server). I monitored my logs and all seemed well. Then a few weeks later I noticed the issue was back. As it is not a critical issue, I have been messing with it off and on. I believe one of my dockers is triggering this issue in my setup. I attach my logs and diagnostics. The interesting bit is below (I think). I rebooted my server and things ran fine for several days as I had all dockers off. I then turned on Plex (as this is the one I thought might be triggering the issue, but also because we wanted to watch a movie) and nothing happened in the log. All was still good. But when the auto-updater ran and restarted my dockers, the issue manifested itself again. Just curious if anyone can help me narrow this down. Many thanks!

Aug 30 09:36:39 Zack-unRAID kernel: microcode: microcode updated early to revision 0x24, date = 2018-01-21
Aug 30 09:36:39 Zack-unRAID kernel: Linux version 4.14.35-unRAID (root@develop64) (gcc version 7.3.0 (GCC)) #1 SMP PREEMPT Thu Apr 19 14:06:21 PDT 2018
Aug 30 09:36:39 Zack-unRAID kernel: Command line: BOOT_IMAGE=/bzimage initrd=/bzroot nomodeset
Aug 30 09:36:39 Zack-unRAID kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 30 09:36:39 Zack-unRAID kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 30 09:36:39 Zack-unRAID kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 30 09:36:39 Zack-unRAID kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 30 09:36:39 Zack-unRAID kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 30 09:36:39 Zack-unRAID kernel: e820: BIOS-provided physical RAM map:
Aug 30 09:36:39 Zack-unRAID kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000098bff] usable
Aug 30 09:36:39 Zack-unRAID kernel: BIOS-e820: [mem 0x0000000000098c00-0x000000000009ffff] reserved
Aug 30 09:36:39 Zack-unRAID kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Aug 30 09:36:39 Zack-unRAID kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000cd536fff] usable
Aug 30 09:36:39 Zack-unRAID kernel: BIOS-e820: [mem 0x00000000cd537000-0x00000000cd53dfff] ACPI NVS
. . . . . .
Aug 30 12:35:18 Zack-unRAID kernel: IPv6: ADDRCONF(NETDEV_UP): veth1119267: link is not ready
Aug 30 12:35:18 Zack-unRAID kernel: docker0: port 13(veth1119267) entered blocking state
Aug 30 12:35:18 Zack-unRAID kernel: docker0: port 13(veth1119267) entered forwarding state
Aug 30 12:35:18 Zack-unRAID kernel: docker0: port 13(veth1119267) entered disabled state
Aug 30 12:35:18 Zack-unRAID kernel: eth0: renamed from veth6f8d175
Aug 30 12:35:18 Zack-unRAID kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth1119267: link becomes ready
Aug 30 12:35:18 Zack-unRAID kernel: docker0: port 13(veth1119267) entered blocking state
Aug 30 12:35:18 Zack-unRAID kernel: docker0: port 13(veth1119267) entered forwarding state
Aug 30 12:35:20 Zack-unRAID avahi-daemon[7109]: Joining mDNS multicast group on interface veth1119267.IPv6 with address fe80::ccc8:7bff:fe12:a8c1.
Aug 30 12:35:20 Zack-unRAID avahi-daemon[7109]: New relevant interface veth1119267.IPv6 for mDNS.
Aug 30 12:35:20 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::ccc8:7bff:fe12:a8c1 on veth1119267.*.
Aug 30 14:46:09 Zack-unRAID kernel: mdcmd (42): spindown 0
Aug 30 16:26:15 Zack-unRAID kernel: mdcmd (43): spindown 2
Aug 30 16:26:17 Zack-unRAID kernel: mdcmd (44): spindown 3
Aug 30 16:26:17 Zack-unRAID kernel: mdcmd (45): spindown 4
Aug 30 16:26:17 Zack-unRAID kernel: mdcmd (46): spindown 5
Aug 30 20:41:44 Zack-unRAID kernel: mdcmd (47): spindown 0
Aug 30 20:41:45 Zack-unRAID kernel: mdcmd (48): spindown 3
Aug 30 20:41:45 Zack-unRAID kernel: mdcmd (49): spindown 4
Aug 30 21:00:17 Zack-unRAID kernel: mdcmd (50): spindown 1
Aug 30 22:20:57 Zack-unRAID kernel: mdcmd (51): spindown 5
Aug 30 23:32:56 Zack-unRAID kernel: mdcmd (52): spindown 2
Aug 30 23:36:36 Zack-unRAID kernel: mdcmd (53): spindown 1
Aug 31 00:00:01 Zack-unRAID Docker Auto Update: Community Applications Docker Autoupdate running
Aug 31 00:00:01 Zack-unRAID Plugin Auto Update: Checking for available plugin updates
Aug 31 00:00:01 Zack-unRAID Docker Auto Update: Checking for available updates
Aug 31 00:00:05 Zack-unRAID Plugin Auto Update: dynamix.system.stats.plg version 2018.08.29a does not meet age requirements to update
Aug 31 00:00:05 Zack-unRAID Plugin Auto Update: Community Applications Plugin Auto Update finished
Aug 31 00:00:19 Zack-unRAID sSMTP[18184]: Creating SSL connection to host
Aug 31 00:00:19 Zack-unRAID sSMTP[18184]: SSL connection using ECDHE-RSA-AES128-GCM-SHA256
Aug 31 00:00:21 Zack-unRAID sSMTP[18184]: Sent mail for ra2258@gmail.com (221 2.0.0 closing connection q21-v6sm3543595wmq.3 - gsmtp) uid=0 username=root outbytes=693
Aug 31 00:00:22 Zack-unRAID Docker Auto Update: Stopping CalibreWeb
Aug 31 00:00:22 Zack-unRAID Docker Auto Update: docker stop -t 10 CalibreWeb
Aug 31 00:00:32 Zack-unRAID kernel: docker0: port 1(veth14d836d) entered disabled state
Aug 31 00:00:32 Zack-unRAID kernel: vethafdcedb: renamed from eth0
Aug 31 00:00:32 Zack-unRAID avahi-daemon[7109]: Interface veth14d836d.IPv6 no longer relevant for mDNS.
Aug 31 00:00:32 Zack-unRAID avahi-daemon[7109]: Leaving mDNS multicast group on interface veth14d836d.IPv6 with address fe80::3c8c:cff:fe3b:e5cc.
Aug 31 00:00:32 Zack-unRAID kernel: docker0: port 1(veth14d836d) entered disabled state
Aug 31 00:00:32 Zack-unRAID kernel: device veth14d836d left promiscuous mode
Aug 31 00:00:32 Zack-unRAID kernel: docker0: port 1(veth14d836d) entered disabled state
Aug 31 00:00:32 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::3c8c:cff:fe3b:e5cc on veth14d836d.
Aug 31 00:00:32 Zack-unRAID Docker Auto Update: Installing Updates for CalibreWeb
Aug 31 00:02:09 Zack-unRAID Docker Auto Update: Restarting CalibreWeb
Aug 31 00:02:09 Zack-unRAID kernel: docker0: port 1(veth069ef4d) entered blocking state
Aug 31 00:02:09 Zack-unRAID kernel: docker0: port 1(veth069ef4d) entered disabled state
Aug 31 00:02:09 Zack-unRAID kernel: device veth069ef4d entered promiscuous mode
Aug 31 00:02:09 Zack-unRAID kernel: IPv6: ADDRCONF(NETDEV_UP): veth069ef4d: link is not ready
Aug 31 00:02:09 Zack-unRAID kernel: eth0: renamed from vetha5a479d
Aug 31 00:02:09 Zack-unRAID kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth069ef4d: link becomes ready
Aug 31 00:02:09 Zack-unRAID kernel: docker0: port 1(veth069ef4d) entered blocking state
Aug 31 00:02:09 Zack-unRAID kernel: docker0: port 1(veth069ef4d) entered forwarding state
Aug 31 00:02:09 Zack-unRAID sSMTP[18651]: Creating SSL connection to host
Aug 31 00:02:09 Zack-unRAID sSMTP[18651]: SSL connection using ECDHE-RSA-AES128-GCM-SHA256
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Joining mDNS multicast group on interface veth069ef4d.IPv6 with address fe80::a013:afff:fe71:1fc5.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: New relevant interface veth069ef4d.IPv6 for mDNS.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::a013:afff:fe71:1fc5 on veth069ef4d.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::ccc8:7bff:fe12:a8c1 on veth1119267.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::64c4:5fff:fe8c:2ea2 on vethd2d6e3a.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::60e0:42ff:fe7b:80dd on vethf61ec24.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::7868:58ff:febb:d786 on vetha52000d.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::2884:93ff:fe00:83e6 on veth75d2346.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::1859:fff:fefc:b7c1 on veth3e43557.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::6c63:eaff:fe7c:2782 on veth685b816.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::30c2:abff:feb4:1491 on veth5e4de85.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::8c9e:d4ff:feb5:5a4d on veth5e3645e.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::a8ad:abff:fe0a:537b on veth832ce0f.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::f81a:a5ff:fe8b:4745 on vethd949b2a.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::a411:aeff:fe19:7cfa on veth2bab9f2.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for 192.168.122.1 on virbr0.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::42:4eff:fe01:7855 on docker0.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for 172.17.0.1 on docker0.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::587d:17ff:fe4d:7827 on br1.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::ac56:eff:fe8a:f6a6 on br0.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for 192.168.1.115 on br0.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Withdrawing address record for fe80::ec4:7aff:fe32:b928 on eth0.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Host name conflict, retrying with Zack-unRAID-2
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::a013:afff:fe71:1fc5 on veth069ef4d.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::ccc8:7bff:fe12:a8c1 on veth1119267.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::64c4:5fff:fe8c:2ea2 on vethd2d6e3a.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::60e0:42ff:fe7b:80dd on vethf61ec24.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::7868:58ff:febb:d786 on vetha52000d.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::2884:93ff:fe00:83e6 on veth75d2346.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::1859:fff:fefc:b7c1 on veth3e43557.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::6c63:eaff:fe7c:2782 on veth685b816.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::30c2:abff:feb4:1491 on veth5e4de85.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::8c9e:d4ff:feb5:5a4d on veth5e3645e.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::a8ad:abff:fe0a:537b on veth832ce0f.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::f81a:a5ff:fe8b:4745 on vethd949b2a.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::a411:aeff:fe19:7cfa on veth2bab9f2.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for 192.168.122.1 on virbr0.IPv4.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::42:4eff:fe01:7855 on docker0.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for 172.17.0.1 on docker0.IPv4.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::587d:17ff:fe4d:7827 on br1.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::ac56:eff:fe8a:f6a6 on br0.*.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for 192.168.1.115 on br0.IPv4.
Aug 31 00:02:10 Zack-unRAID avahi-daemon[7109]: Registering new address record for fe80::ec4:7aff:fe32:b928 on eth0.*.
Aug 31 00:02:12 Zack-unRAID sSMTP[18651]: Sent mail for ra2258@gmail.com (221 2.0.0 closing connection 34-v6sm13553632wra.20 - gsmtp) uid=0 username=root outbytes=617
zack-unraid-diagnostics-20180901-1115.zip
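In case anyone else wants to watch for the same thing, the line to look for is the "Host name conflict" one; from the unRAID console something like this works (standard syslog location, adjust if yours differs):

grep -i "host name conflict" /var/log/syslog   # shows each time avahi renamed the host
tail -f /var/log/syslog | grep -i avahi        # watch live while starting dockers one at a time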
  20. This was fixed by following the advice in this newer thread: Syslog spammed by Avahi
  21. Boom! That was it. It was the first one.... I had some static IPs that were also in the DHCP IP Range. Shrinking the range and doing some cleanup (which I should have done some time ago) seems to have fixed it. Really excited as I learned something new and got rid of a very annoying error. Many thanks!!
  22. Posting diagnostics as requested. Thanks! zack-unraid-diagnostics-20180501-0801.zip
  23. Hi John, Thanks for your reply. The solution you listed is the one I tried with no luck. When I get home, I will post my diagnostics. Thanks!
  24. My syslog is getting spammed by "Avahi". See attached log. I have seen some other threads on this happening. It seems to be an ongoing thing and I would like to address it if possible. It gets caught by Fix Common Problems, as it tends to fill up my syslog, so I get a warning from FCP. I would appreciate a pointer to other threads or to what I can do in my setup to fix this. This log is shorter (but shows the issue) as I tried one of the solutions in the forum and that required a reboot (but it did not work). Thanks! zack-unraid-syslog-20180429-1111.zip
  25. Howdy, Just received this from "Fix Common Problems": "Call traces found on your server". I am on unRAID 6.4.1 - system information below. Logs attached. I had a quick look and I think it may be something in the docker logs? Any ideas on where I should look would be appreciated. Thanks! zack-unraid-diagnostics-20180311-1135.zip