digitalformula

Everything posted by digitalformula

  1. Unfortunately that doesn't reveal anything. I could throw some debug output in there, though, I guess.

     ```
     Jul 27 10:59:16 ***-unRAID root: Starting go script
     Jul 27 10:59:16 ***-unRAID root: Starting emhttpd
     ```
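The "debug stuff" idea above could be sketched as a step wrapper: write a marker before each command so the log shows exactly where execution stops. This is a hypothetical sketch, not unRAID's mechanism - the real go script would use `logger` so markers land in syslog; this version writes to a plain file and demo paths so it can run anywhere.

```shell
#!/bin/bash
# Hypothetical debug wrapper for /boot/config/go. LOGFILE and the /tmp/go-demo
# paths are stand-ins; a real go script would log via `logger` and touch /root.
LOGFILE="/tmp/go-debug.log"
: > "$LOGFILE"

step() {
    # Record a timestamped marker, then run the command, capturing its output.
    local msg="$1"; shift
    echo "$(date '+%b %e %T') go-debug: $msg" >> "$LOGFILE"
    "$@" >> "$LOGFILE" 2>&1
}

# Same shape as the go script's key-copy steps, against demo paths:
step "creating demo .ssh dir" mkdir -p /tmp/go-demo/.ssh
step "locking down permissions" chmod 700 /tmp/go-demo/.ssh
```

If the script dies partway through, the last marker in the log is the step that killed it.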
  2. Ok, good to know. And no, I missed that info in the release notes - happy to accept fault for that. However, I wouldn't expect that change to cause /boot/config/go to fail/stop executing at that point. If that were the case I guess emhttp wouldn't start, either (and it does, since that's the webserver GUI, right?)
  3. Some, yes. Examples:
     - There's a folder called "shares". The share names within that folder are "anonymized" by having only the first and last letter included, but syslog.txt includes references to the full share name. I have shares based on customer names that I don't want exposed.
     - IP address info isn't anonymized, i.e. my client IP addresses are in there.
     - VM names are in logs/libvirt.txt and I have VMs matching project names.
     - Hardware serial numbers are in syslog.txt, e.g. HDD serials (this is actually a big deal for some client types).
     - The sshd port number is in syslog.txt (etc.)

     I can understand the potential argument that none of this is overly useful for malicious purposes if your network security is up to scratch, but that argument is negated by posting diagnostics on a forum. That said, IMO system information must be either 100% anonymized (*all* instance-specific/identifiable data removed) or there's no point in anonymizing at all. In my experience, logs can show why something happened without showing who it happened to. The disclaimer is that yes, there are situations where information is useless without knowing who was involved, but that's not the case here (again, IMO).
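The kind of scrubbing argued for above can be sketched with a couple of `sed` substitutions. This is purely illustrative - the patterns below (IPv4 addresses, drive serials) are assumptions, not what unRAID's diagnostics collector actually does, and a real anonymizer would need many more rules.

```shell
#!/bin/bash
# Illustrative log-scrubbing sketch - NOT unRAID's actual anonymizer.
# Assumed patterns: mask IPv4 addresses and anything after "serial".
anonymize() {
    sed -E \
        -e 's/[0-9]{1,3}(\.[0-9]{1,3}){3}/x.x.x.x/g' \
        -e 's/serial[=: ]+[^ ]+/serial=REDACTED/gI'
}

echo "Jul 27 10:59:16 tower sshd: connection from 192.168.1.50" | anonymize
```

The point of the sketch: the timestamp, daemon name, and message (the "why") survive, while the instance-specific values (the "who") are masked.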
  4. Perfect, thank you. (Seriously, thanks - not trying to be a smartass).
  5. I'm running 6.9.0-beta25. No need for diagnostics yet since I just want to know if /boot/config/go is still used at this point. If the answer is yes, I'll worry about it then (I'm not comfortable with attaching those diagnostics dumps to forum posts, for fairly obvious reasons).
  6. As per the subject, is /boot/config/go still used? I'm asking because I have stuff in my /boot/config/go file that never gets run. Follow-up question: if /boot/config/go is still used, why would the commands there get ignored? Here is /boot/config/go, for reference:

     ```shell
     #!/bin/bash
     # Start the Management Utility
     # modprobe i915
     # chmod -R 777 /dev/dri

     # copy SSH keys
     mkdir -p /root/.ssh
     chmod 700 /root/.ssh
     cp /boot/config/ssh/authorized_keys /root/.ssh/
     chmod 600 /root/.ssh/authorized_keys

     mount -o remount,size=384m /var/log
     /usr/local/sbin/emhttp &
     ```

     I have verified that authorized_keys is not where it should be after a reboot.
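One way to make a key-copy step like the one above fail loudly instead of silently is to guard it and log when the source is missing. This is a hedged sketch, not the actual go script: the /tmp/go-keys paths stand in for /boot/config/ssh and /root/.ssh, and the demo key material is made up.

```shell
#!/bin/bash
# Hypothetical hardening of the key-copy step: check the source exists and
# report (rather than silently skip) when it doesn't. /tmp paths are stand-ins.
SRC="/tmp/go-keys/authorized_keys"
DEST_DIR="/tmp/go-keys/.ssh"

mkdir -p "$(dirname "$SRC")" "$DEST_DIR"
echo "ssh-ed25519 AAAA... demo@host" > "$SRC"   # demo key material, not a real key

if [ -f "$SRC" ]; then
    cp "$SRC" "$DEST_DIR/authorized_keys"
    chmod 600 "$DEST_DIR/authorized_keys"
else
    echo "go: $SRC missing, skipping key copy" >&2
fi
```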
  7. Sure, but that's why I asked if it's possible to make it user configurable. It wouldn't diminish the functionality of the plugin at all and could even still allow FCP to throw up a notification when an emergency release comes out. Edit: An "error" suggests something is broken. A plugin that isn't updated isn't something that's broken.
  8. Is it possible to add a user-defined alert level for the things FCP raises? To me it's not an error if FCP isn't the latest and isn't a warning if a Docker container isn't the latest (i.e. "if it ain't broke, don't fix it"). Showing those as errors and warnings is misleading.
  9. I would have thought that, too, but I have various aliases set up that do specify the appropriate private key when making the connection, and on a non-standard port now. ssh-agent shouldn't really be required after adding the key to the server's ~/.ssh/authorized_keys, though (which I have done). I didn't "fix" it, though. The workstation in question needed to be rebuilt, so I've just set up an unRAID-specific key pair that I'll use from now on. The original key is only used for some other private connections, so it doesn't need to be used for unRAID on this network. Thanks for responding, though!
  10. Where's the configuration or setting that says which IPs are blocked from connecting via SSH? I have checked /etc/hosts.deny but I believe it won't persist between reboots anyway. I can only connect if I use this:

      ```shell
      ssh -o IdentitiesOnly=yes root@unraid
      ```

      Additional info on why I'm asking:
      - Fix Common Problems said I was possibly compromised on March 14th. All the attempts came from the static IP of my workstation, and all on a range of ports that aren't explicitly configured in unRAID.
      - Until this afternoon SSH was still on port 22, but that's OK because of the other stuff below (it's not on port 22 anymore, but that's made no difference).
      - My ISP's router is a gateway device only and everything inside it is protected by various IDP and DPI devices (I would never use an ISP's "free" device as anything other than a gateway).
      - unRAID is not exposed to the internet, is not in any sort of DMZ, and there are no port forwards or pinholes that allow SSH anywhere near it from the outside.
      - I've rebooted.
      - unRAID is on the latest version.
      - All containers and plugins are on their latest versions.
      - There's no access to anything from the outside except via a VPN from specific approved devices, including explicit MAC filtering.
      - SSH keys in /boot/config/ssh are not 0 bytes in size, as per this:

      For that reason, I don't think I've actually been compromised - it was probably something else that I'm yet to track down. On that day I was making some changes and might've messed something up. Thanks
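A per-host entry in the client's ~/.ssh/config is one way to make the `IdentitiesOnly` workaround permanent instead of typing it on every connection. The values below are placeholders (the post doesn't give the actual port or key path):

```
# ~/.ssh/config (hypothetical values)
Host unraid
    HostName unraid
    User root
    Port 2222                   # stand-in for the non-standard port
    IdentityFile ~/.ssh/unraid_ed25519   # hypothetical unRAID-specific key
    IdentitiesOnly yes          # offer only this key, not every key the agent holds
```

`IdentitiesOnly yes` matters here because OpenSSH otherwise offers every agent-loaded key in turn, and a server that counts failed key offers as authentication failures can appear to be "blocking" the client.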
  11. I gave up waiting and moved PiHole to a CentOS 7 VM a while back. Deleted the PiHole container. Didn't have a lot of choice since I need VMs more than I need PiHole to be running as a container.
  12. "How many subnets do you have? Is the computer you're trying to reach Pi-hole from on the same subnet as Pi-hole? If not, try the same subnet. BTW, Pi-hole should have a web interface - try opening that as well." Yup, sorry - I should've included that info before. unRAID, this desktop, the Pi-hole WebUI etc. are all on the same subnet. Pi-hole and its UI are working fine and I can see traffic there, plus ads being blocked. Tested with speedtest.net and a few others that are known to be bad for ads. I don't really use KVM on unRAID anymore so I'll just ignore it for now. Thanks, though.
  13. Yes, same subnet - this part of the network is all on the same /24 (the other parts are DMZ etc., so completely isolated from unRAID). Edit: The Pi-hole UI does show that IP as the "Pi-hole IPv4 address", though. Edit 2: Btw, Pi-hole is functioning just fine and ads are being blocked. I can see the stats in the UI.
  14. It is already configured as bridge. There's a setting for "Server IP" but no matter what I set that to, that IP doesn't respond to anything. It's no biggie, though - stopping KVM has worked.
  15. Hello - Is dnsmasq required for the operation of KVM? Asking as I'm trying to install Pi-hole but the Docker container is failing due to port 53 already being in use. Netstat tells me it's in use by dnsmasq on local address 192.168.122.1, and I've tracked that down to 192.168.122.0/24 being attached to virbr0 in the routing table. Stopping KVM allows the container to work. Can I still use KVM if I somehow remove/stop dnsmasq? Docker is more important than KVM on this server, so it's no biggie if the answer is no.
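For what it's worth, libvirt's network XML has a `dns` element that can disable the DNS side of its dnsmasq instance while keeping DHCP for VMs. A hedged sketch of the default network edited via `virsh net-edit default` might look like this (assuming a libvirt version that supports the attribute; verify against the libvirt docs before relying on it):

```xml
<!-- Hypothetical edit to libvirt's "default" network: <dns enable="no"/>
     asks dnsmasq to stop serving DNS on 192.168.122.1:53, which is the
     port the Pi-hole container was colliding with. DHCP still works. -->
<network>
  <name>default</name>
  <forward mode="nat"/>
  <bridge name="virbr0" stp="on" delay="0"/>
  <dns enable="no"/>
  <ip address="192.168.122.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.122.2" end="192.168.122.254"/>
    </dhcp>
  </ip>
</network>
```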
  16. I mean exactly that. Not deleted, just completely stopped/down/unavailable. The GUI shows no shares and an SSH listing of /mnt/user just shows "????????". No, restarting NFS doesn't help. The only way to get them back is to reboot the entire unRAID server. No, I'm deleting them from an NFS mount on my Linux client (Ubuntu 19.04, but it was 18.04 before that). Yes, they're all in one folder.
  17. @limetech Thanks for your response. I have tried setting the fuse_remember Tunable setting to 600 and, while it did seem to help for a few minutes, the problem came back straight away. Here is what I did:
      - Created 1000 zero-byte files in /mnt/user/sys/1000_files
      - Deleted them via the NFS mount on my client. This worked; I don't recall this ever working in the past.
      - Created 1000 1.2MB files in /mnt/user/sys/1000_files
      - Deleted them via the NFS mount on my client. All shares immediately disappeared.

      FYI
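The first repro step above can be sketched as a script. The target directory here is a stand-in for /mnt/user/sys/1000_files (the sketch only covers the zero-byte case; the 1.2MB case would swap the redirection for a `dd` write):

```shell
#!/bin/bash
# Sketch of the repro: create 1000 zero-byte files, ready to be deleted over
# the NFS mount from a client. TARGET is a stand-in for /mnt/user/sys/1000_files.
TARGET="${TARGET:-/tmp/1000_files}"
mkdir -p "$TARGET"
for i in $(seq 1 1000); do
    : > "$TARGET/file_$i"    # creates an empty file, like `touch`
done
echo "created $(ls "$TARGET" | wc -l) files in $TARGET"
```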
  18. @limetech or someone ... ? Please? Yeah you can think I'm being a dick or demanding but this is a genuine issue.
  19. Bump? Surely I'm not the only one that thinks shares disappearing when you delete files is a bad thing? 😞
  20. By the way, I have tried setting unRAID up to point at itself, which does work. However, nothing shows up in the syslog at all when unRAID crashes as I've described above. For that reason I'd say the logs are pretty much useless. What you see below is from when I started syslog and then rebooted after I forced unRAID to bomb again.

      ```
      Jul  5 13:52:01 df-unRAID rsyslogd: [origin software="rsyslogd" swVersion="8.1903.0" x-pid="26176" x-info="https://www.rsyslog.com"] start
      Jul  5 14:00:47 df-unRAID root: Delaying execution of fix common problems scan for 10 minutes
      Jul  5 14:00:47 df-unRAID emhttpd: Starting services...
      Jul  5 14:00:47 df-unRAID emhttpd: shcmd (205): /etc/rc.d/rc.samba restart
      Jul  5 14:00:48 df-unRAID rsyslogd: [origin software="rsyslogd" swVersion="8.1903.0" x-pid="5461" x-info="https://www.rsyslog.com"] start
      Jul  5 14:00:49 df-unRAID root: Starting Samba: /usr/sbin/nmbd -D
      Jul  5 14:00:49 df-unRAID root: /usr/sbin/smbd -D
      Jul  5 14:00:49 df-unRAID root: /usr/sbin/winbindd -D
      ```
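This is the expected limitation of pointing syslog at the crashing box itself: messages written just before the crash die with it. Forwarding to a separate machine is what preserves them. In classic rsyslog syntax that forwarding rule looks something like the fragment below (the collector IP and port are placeholders):

```
# /etc/rsyslog.conf fragment (hypothetical remote collector)
# Single @ = UDP, double @@ = TCP; 514 is the conventional syslog port.
*.*  @192.168.1.10:514
```

With a rule like this on the crashing host, the last messages before the crash should survive on the collector even though the local log is lost.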
  21. Yes, but I believe that requires setting up a syslog server (which I haven't done yet). Oh well, I guess I'll set one up and make unRAID crash again, since I can reproduce this issue. I'll set unRAID up to point at itself, which I assume is a valid configuration.
  22. Yes, they will, but I've posted them before and the issue is identical. Besides, this being a production NAS, it's far more important to get it back up and running than to remember to dump diagnostics every time (which I think should be stored permanently elsewhere, although I don't know whether there's a way to do that).
  23. This is re a topic I posted in February. I posted logs then and asked if there was any update - no response. Here's the post: I'm on unRAID 6.7.1 right now and this is still an issue. I've just tried to delete 1365 files from an NFS share and the shares immediately disappeared, effectively taking our production NAS offline. The only way to get them back is to completely restart the NAS. @limetech You acknowledged there's a bug report for this back in February. Did anything happen with it?
  24. Yeah, I looked into that. You're right, though - that does achieve the desired result, but it's more time-consuming than I'd like, since the backup data is literally hundreds of thousands of small files that are anywhere from 1MB to 50MB. Thanks for the suggestion, though.