Aquenon

Members
  • Posts: 22

Aquenon's Achievements: Noob (1/14) · Reputation: 0

  1. After rebuilding my server, CloudBerry Backup tells me “First argument must be a string, Buffer, ArrayBuffer, Array, or array-like object.” when I try to launch the WebUI. Is this because it’s a new machine (everything is new except the hard drives)?
  2. Hi, this is an afterthought, but I have two empty 5.25” drive bays that I’d like to do something with. Ideally a fan controller/temperature sensor that interfaces with the motherboard, if possible. I don’t have much room, especially in the top bay, because of the tubes for the radiator. [Picture of the room (or lack thereof) in my two 5¼” drive bays.] I'm looking for possibilities, but it’s not something I’m dead set on doing, just something I’d like to install if it’s possible. This is a Fractal Design Define R5 case. The radiator is so close because it’s a 140mm x 420mm radiator… I might have gone overkill… but it is cooling a 13900F CPU. If anyone has any ideas, I’d appreciate it. Thanks!
  3. Thanks, Hoopster. Not sure why I couldn’t find that. I unchecked the option and haven’t seen this error since, so I’ll mark your post as the solution. Odd, though: I had this exact same setup on the old hardware, and it never caused problems on the network.
  4. Thanks. I should have thought of that to begin with. That took care of the old bonded interface, but not the network outages. Before I start Docker, there are only two entries in the routing table:

     default 10.0.0.10 via br0
     10.0.0.10/24 br0

     When Docker starts, I get the routing table attached. Using 'docker network ls' and 'docker network inspect', the entries for 172.17.0.0/16 and 172.20.0.0/16 appear. I don't know where the shim-br0 entries are coming from. They're obviously related to Docker, but they don't show up with the 'docker network' commands, and my theory is that these are the problem. I'm unable to delete them; the only way to get rid of them is to disable Docker again.
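One thing worth ruling out when LAN and Docker routes fight each other is a subnet collision. A minimal sketch using Python's standard ipaddress module, with the subnets taken from the routing table above (adjust to match your own setup):

```python
import ipaddress

# LAN subnet and the two Docker networks reported by 'docker network inspect'
lan = ipaddress.ip_network("10.0.0.0/24")
docker_nets = [
    ipaddress.ip_network("172.17.0.0/16"),  # default docker0 bridge
    ipaddress.ip_network("172.20.0.0/16"),  # custom bridge network
]

for net in docker_nets:
    status = "OVERLAPS" if lan.overlaps(net) else "does not overlap"
    print(f"{net} {status} with LAN {lan}")
```

Here both Docker subnets are disjoint from 10.0.0.0/24, so a plain address collision is not the cause; the shim interfaces are a separate mechanism.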
  5. Hi, I've been troubleshooting this for a week and have finally found what's causing it; I just don't know how to fix it.

     I recently had my unRaid server die, probably a motherboard issue, so I bought a new motherboard and a new CPU (the old CPU was a 6600K; the new one is a 13900F). Some issues persist, as I wasn't planning on upgrading and never got to undo some things beforehand. One of them: the old system had two NICs, and I had them bonded in a failover. The new system seems to be operating fine, but on boot it hangs for a bit and then reports that the network bond failed. I don't know if this is causing the problems or not, but I thought I'd mention it. There's nothing to change in the network settings, but there is some config left over from the first machine.

     The problem comes in when I start Docker. As long as Docker isn't running, everything on the network is fine. When I start it, however, I have two MAC addresses getting the same IP. The MAC address of the NIC has been entered into my DHCP server so unRaid always gets the same IP, but there is another MAC address getting that same IP. What ends up happening is that something in Docker fights my DHCP server, and then suddenly my entire network goes down. Every device gets a self-assigned IP (169.254.x.x) for a few minutes until the DHCP server regains control; then the addresses get assigned like they're supposed to, and I get my network back. Then it happens again a few minutes later. The only way I have found to stop it is to shut Docker down on my unRaid server.

     I don't know if that second MAC address getting the same IP is what is causing this, but I have tried to block that MAC on the network. When I do, unRaid/Docker just changes the MAC and the same behavior starts again. I finally found what that MAC address belongs to today:
shim-eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.10  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::582b:1cff:feb4:5806  prefixlen 64  scopeid 0x20<link>
        ether 5a:2b:1c:b4:58:06  txqueuelen 1000  (Ethernet)
        RX packets 21069  bytes 4794289 (4.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 188380  bytes 20486189 (19.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

This interface doesn't exist unless Docker is running. And this is what shows up in my DHCP logs when the issue happens:

Date                       Facility  Severity  Process  PID    Line
2023-05-12T10:57:49-04:00  23        Error     dhcpd    67826  dhcp.c:4164: Failed to send 300 byte long packet over igc1 interface.
2023-05-12T10:57:49-04:00  23        Error     dhcpd    67826  send_packet: No buffer space available
2023-05-12T10:57:46-04:00  23        Error     dhcpd    67826  dhcp.c:4164: Failed to send 300 byte long packet over igc1 interface.
2023-05-12T10:57:46-04:00  23        Error     dhcpd    67826  send_packet: No buffer space available
2023-05-12T10:57:46-04:00  23        Error     dhcpd    67826  dhcp.c:4164: Failed to send 300 byte long packet over igc1 interface.
2023-05-12T10:57:46-04:00  23        Error     dhcpd    67826  send_packet: No buffer space available
2023-05-12T10:57:44-04:00  23        Error     dhcpd    67826  dhcp.c:4164: Failed to send 300 byte long packet over igc1 interface.
2023-05-12T10:57:44-04:00  23        Error     dhcpd    67826  send_packet: No buffer space available
2023-05-12T10:57:38-04:00  23        Error     dhcpd    67826  dhcp.c:4164: Failed to send 300 byte long packet over igc1 interface.
2023-05-12T10:57:38-04:00  23        Error     dhcpd    67826  send_packet: No buffer space available
2023-05-12T10:57:24-04:00  23        Error     dhcpd    67826  dhcp.c:4164: Failed to send 300 byte long packet over igc1 interface.
2023-05-12T10:57:24-04:00  23        Error     dhcpd    67826  send_packet: No buffer space available
2023-05-12T10:57:24-04:00  23        Error     dhcpd    67826  dhcp.c:4164: Failed to send 300 byte long packet over igc1 interface.
2023-05-12T10:57:24-04:00  23        Error     dhcpd    67826  send_packet: No buffer space available
2023-05-12T10:57:22-04:00  23        Error     dhcpd    67826  dhcp.c:4164: Failed to send 300 byte long packet over igc1 interface.
2023-05-12T10:57:22-04:00  23        Error     dhcpd    67826  send_packet: No buffer space available
2023-05-12T10:57:21-04:00  23        Error     dhcpd    67826  dhcp.c:4164: Failed to send 300 byte long packet over igc1 interface.
2023-05-12T10:57:21-04:00  23        Error     dhcpd    67826  send_packet: No buffer space available

Does anyone have any idea what I need to do to stop this? I've included the server diagnostics if they're needed. Thanks, Scott

server-diagnostics-20230513-1627.zip
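One small detail that backs up the "it's a Docker-generated interface" theory: the MAC in the ifconfig output above, 5a:2b:1c:b4:58:06, has the locally-administered bit set, which marks a software-assigned (not burned-in) address. That also explains why blocking the MAC just makes Docker generate a new one. A quick check of that bit:

```python
def is_locally_administered(mac: str) -> bool:
    """The second-least-significant bit of the first octet marks a
    locally administered (software-assigned) MAC address."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

# MAC of the shim-eth0 interface from the ifconfig output above
print(is_locally_administered("5a:2b:1c:b4:58:06"))  # True -> software-generated
# Docker's default bridge MACs start 02:42, which also has the bit set
print(is_locally_administered("02:42:ac:11:00:02"))  # True
# A burned-in vendor MAC (hypothetical example) does not
print(is_locally_administered("00:1b:21:aa:bb:cc"))  # False
```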
  6. Hi, with Dynamix Fan Auto Control, there are no fan controllers listed in PWM controllers. I’m using the motherboard right now. What PWM controllers are compatible with this plug-in? Thanks!
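As I understand it, the plugin can only list PWM controllers that the kernel actually exposes under the standard Linux hwmon sysfs tree, so an empty dropdown usually means no Super I/O sensor driver is loaded for the board. A sketch (treat the plugin-detection detail as an assumption; the sysfs paths are standard Linux):

```python
import glob
import os

# PWM-capable controllers expose pwm1, pwm2, ... under /sys/class/hwmon
pwm_files = sorted(glob.glob("/sys/class/hwmon/hwmon*/pwm[0-9]"))

if pwm_files:
    for path in pwm_files:
        # The sibling "name" file identifies the driver (e.g. nct6775, it87)
        name_file = os.path.join(os.path.dirname(path), "name")
        driver = open(name_file).read().strip() if os.path.exists(name_file) else "?"
        print(f"{path} (driver: {driver})")
else:
    print("No PWM controllers exposed; the board's sensor driver may not be loaded")
```

Running this from the unRaid terminal shows whether there is anything for the plugin to find before shopping for extra hardware.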
  7. Hi, I know this is an odd question, but I need to back these files up, and the only thing I have to send them to is an old 2008 Mac Pro whose only reason for running these days is as a Time Machine server (and possibly a Tdarr node again, but I digress). I tried rsync, but it wouldn't work, and I think I found out why when I tried LuckyBackup: macOS comes with an ancient version of rsync, and I can't get rid of it. I installed the latest version with Homebrew, and while it works on the Mac, when rsync (with or without LuckyBackup) tries to connect, it still connects to that old version. Duplicati is out; no matter what I do, it's incredibly slow, and it has been like that on every system I've tried it on, not just this scenario. I'm looking for options that would work here. I've thought about creating a macOS VM on unRaid (if that's even possible) and having it back up via Time Machine, or making a Linux VM on the Mac Pro for LuckyBackup to connect to. But I hate VMs, with all those resources locked to them whether they're needed at that moment or not. So if I have to, I will... but I would rather find a solution that doesn't involve VMs if I can. I'm mainly looking for open-source solutions before I go searching for paid ones, but I'll welcome suggestions from both categories for my scenario. PS: I did try to turn the Mac Pro into another unRaid server, but it would not boot from the unRaid drive. Honestly, that is my preferred method here; I just can't get unRaid to boot on it. Thanks in advance!
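On the Homebrew-rsync point: when rsync connects over SSH, it executes whatever `rsync` is first on the remote login shell's PATH, which is typically why the old Apple build keeps answering. rsync's `--rsync-path` option names the remote binary explicitly. A sketch of the command (the Homebrew path and the paths/hostname are assumptions; check the real location with `which rsync` on the Mac):

```python
import shlex

# --rsync-path tells the local rsync which remote binary to execute,
# bypassing the ancient /usr/bin/rsync that ships with macOS.
cmd = [
    "rsync", "-av",
    "--rsync-path=/usr/local/bin/rsync",  # Homebrew path on Intel Macs (assumption)
    "/mnt/user/backups/",                 # hypothetical source share
    "user@macpro.local:/Volumes/Backup/", # hypothetical destination
]
print(shlex.join(cmd))
```

LuckyBackup also lets you add extra rsync options, so the same flag should work from there.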
  8. Apologies if this isn’t the right place, but this is where the image’s Support Forums link sent me. I’ve been trying to get BackupPC up and running (off and on) for probably a year. Most info I can find online is years, or decades, old. I research a bit, lose patience, then it sits there not running for a couple of months. I'm trying to back up a Windows 10 machine, so I downloaded cygwin-rsyncd to it (and I’ve tried the default SMB as well). When trying to run the backup, I get “backup failed (No files dumped for share C$)”. Any ideas? I’m hoping this was the correct place. Thanks
  9. But updates are supposed to show up there? That’s all I see.
  10. Hi, this might be because of differences between 6.9.2 and 6.10. I keep two shares open on an old Mac Pro; they are used for Tdarr Node requirements. One is the media share, the other is the temp share. The media share is working fine. The temp share is mostly working fine, and it worked 100% fine until now. My unRaid box has a lot of RAM (64GB), so all transcoding is done in RAM (instead of to a transcoding directory on the drives) using the /tmp directory. And that’s what’s not working on the remote computer now (everything on the unRaid box in relation to /tmp is working fine). I had made a link in the temp user share that mapped to /tmp; that way Tdarr could read from the media share and transcode in the temp share (/temp/tmp). But now, while everything else in the temp share is working correctly, the only thing I can see from the old Mac Pro is that /tmp is there. I cannot see inside it like I used to, and now Tdarr Node won’t work because it doesn’t have access to the transcoding location. Is this some new security in 6.10? Or a coincidence? Thanks, Scott
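For anyone hitting the same wall: 6.10 shipped a newer Samba, and recent Samba releases refuse to follow symlinks that point outside the share unless explicitly allowed, which matches the "I can see /tmp but not inside it" symptom. If that is what changed, an override in Settings → SMB → Samba extra configuration along these lines might restore the old behaviour. The parameter names are standard smb.conf options; the share stanza name and whether 6.10's Samba update is actually the culprit are assumptions:

```
# Assumption: the share is named "temp"; weigh the security trade-off
[temp]
    follow symlinks = yes
    wide links = yes
```

Note that Samba ignores `wide links` while `unix extensions = yes` is in effect, so `unix extensions = no` may also be needed in the global section, at the cost of losing Unix extensions for all clients.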
  11. It is. Oddly enough, it’s showing me things today. Icon beside Action Centre is alternating between yellow and red, but what’s in there is just things that have been updated, all Docker images I believe. I did recently upgrade to 6.10, but that was done before posting, so I don’t know why it’s suddenly working, but I’m glad it is. Does the icon always alternate colors like that, or does the red mean there’s something else that should be showing?
  12. Glad I went searching, because it appears on mine about half of the time and I wondered what it was for. But every time I click/tap it, it comes back with “No matching applications found”. There has to be something it’s seeing, correct? Or is this hopefully just a UI glitch (which I can live with)? Thanks, Scott
  13. It wasn’t a CloudFlare problem, though I double-checked to be sure. I had already done steps 1-3 of your debug section, and step 4 worked as well, but it was your curl commands that led me to my problem. They worked just fine until I swapped the IPs for the URLs, and suddenly I wasn’t getting any response. No codes, no anything; it was timing out. I went back to my app and it was working fine. Then, digging into the app, I found that it had its own DNS settings and wasn’t using mine. When I forced it to use mine, it started timing out as well, although every other URL worked just fine. I pulled out my phone and disabled WiFi so I was coming at it externally, and every site I set up in NPM worked perfectly. It turns out my DNS server had a setting that wouldn’t allow my domain to be forwarded on to an upstream server. I turned that setting off, and everything works. I knew it was something simple, and thankfully your suggestion led me to it. Thank you!
  14. I can’t figure this out. I adjusted unRAID’s http/https ports so I could assign 80 and 443 to NPM (I also had the same problem before I changed this, but this is current configuration). My router has 80 and 443 forwarded correctly. If I type my domain in by itself, I get the NPM congratulations page. But if I put anything in front of it, such as unifi, plex, nginx, or www, I get ‘cannot open the page because cannot connect to server’. But all services are running. nginx forwards to port 81 and even when clicking the entry from within nginx on port 81, it still cannot connect to the server. I am using CloudFlare, but dns only, no proxy there. The entries are there. I use a wildcard, but I’ve added the explicit cname as well when trying to troubleshoot. With the wildcard there, no matter what I put in front of my domain, dig gives me the correct IP. I know this is the reason I get internal errors when requesting an SSL certificate. I just looked, and all settings for the container are the default you have it set to other than the ports actually being 80, 81, and 443 (unRAID’s GUI is now 60080 and 60443). The only thing different is the network is using the custom bridge everything else uses so I can refer to other containers by their name. I’m not doing that in NPM at the moment, I’m using actual IPs. But I have tried names and IPs, and still same issue. I just set it back to the default unRaid bridge to troubleshoot, still having the issue. I just know this is something simple given my luck, I just cannot figure out what it might be.
  15. Now that I have unRaid back up (thanks, Squid), I have an idea that I would like to explore. Currently there are two internet connections in the house: one for general use for everyone, and a second for my job, as I work from home. Before I got the second connection, there were times when things would get a bit dicey depending on what others were doing; the second connection takes care of all that, and now video calls don’t stutter as much. The thing is, that connection is dead from 6pm to 9am the next day, and all weekend. Which got me thinking: is there a way, with the bonding modes, to load balance unRaid across the two connections? I've figured out a script that would turn the second NIC off during work hours. Curious if this is possible. Thanks