ptirmal

Everything posted by ptirmal

  1. I have the same issue. I also noticed that after my Fios went down for the first time in 5 years, I got a new IP and none of my services' addresses were updated correctly 😐.
  2. That's what I've been doing, but I wasn't aware that's why it isn't installed. Good to know.
  3. Is there something else you need to do to have it installed on boot? I have done this and it doesn't get installed on boot. I have to manually install the package.
  4. I haven't used borgmatic before. Did you have borg running on previous versions?
  5. I will be following this. I have been putting off updating to 6.11 because I rely on borg backups.
  6. Thanks so much for this. Didn't have time to look far into it, appreciate the info!
  7. If there is no data on the disk you want to swap, you can create a new config; that is also how you would shrink your array. Either way you would rebuild parity: in this case you either trust the existing data and rebuild parity, or rebuild the empty disk from parity.
  8. Can someone tell me if this is something I need to be concerned about? I am running an ASRock J3455 board with BIOS 1.80. I have been using this as an offsite backup server. I recently added Fix Common Problems and saw this. It seems to happen at boot, and Google doesn't tell me much. It doesn't seem to affect stability, as I only interact with the server when I want to add a drive or something. backup-diagnostics-20211116-0648.zip
  9. I have been noticing high memory usage (80-90%) on my server and started getting warnings from Fix Common Problems to post my diagnostics. I don't recall any recent changes; the last reboot was over 60 days ago. My server has 16GB of RAM and runs several Dockers, with memory limits on most of them to keep them from using too much. I do notice I usually have a large amount of RAM cached (6+ GB), but I'm not sure if 80-90% usage is something I need to act on; there seem to be no issues otherwise. unraid6-diagnostics-20210209-0605.zip
  10. What is the protocol for restoring individual files/folders? Can you browse the .tar via Unraid? If so, can someone help with commands to navigate it? My tar is ~50GB and I need to restore a folder that's probably under 10MB. WinRAR doesn't seem to like opening it over SMB from my Windows 10 computer. Not sure what the best way to do this is.
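One way to pull a single small folder out of a large tar from the Unraid console, rather than opening it over SMB, is to list the archive and then extract only the path you need. This is a sketch; the archive name and folder paths below are illustrative:

```shell
# List the archive contents to find the exact member path
# (archive name and the grep pattern are illustrative):
tar -tvf /mnt/user/backups/backup.tar | grep 'Photos/2020'

# Extract only that folder into the current directory; tar scans the
# archive sequentially, so nothing else gets unpacked:
tar -xvf /mnt/user/backups/backup.tar 'share/Photos/2020'
```

The member path must match what `tar -tvf` printed (including any leading directory components), or nothing will be extracted.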
  11. I'm running my EdgeRouter X now, and have been for 5 or so days. Since I disabled password authentication I haven't gotten any messages from the Fix Common Problems plugin, so I assumed it was OK, but when I just checked I was still seeing connection attempts like this. I just disabled SSH to keep it clear, but I am not sure what else to check. It's weird that I hadn't changed any network settings and only started getting these connections when I rebooted the server. Any other ideas?
  12. So I thought I had this figured out, but I don't. I had disabled password auth for SSH, which persists, but on reboot SSH becomes enabled again. It looked like my logs were OK, but checking back now it seems I have a bunch of disconnects from random IP/port combinations. Here's the kicker: I switched my router back to my EdgeRouter X... I don't have any DMZ set up, and I didn't in the UniFi setup either... How can I narrow down why these IPs/ports are/were making it to my server?
  13. Is there any way to disable SSH on startup?
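One approach, assuming Unraid's Slackware-style rc scripts (verify the script path on your version before relying on it), is to stop sshd from the flash drive's go file so it never stays up after boot. This is a config sketch, not a tested recipe:

```shell
# Hypothetical lines appended to /boot/config/go (executed at boot on Unraid).
# Stops the SSH daemon after startup; check that /etc/rc.d/rc.sshd
# exists on your Unraid version before adding this.
/etc/rc.d/rc.sshd stop
```

Newer Unraid releases also expose an SSH toggle under the management settings, which is the cleaner option if your version has it.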
  14. Sorry, yes, NGINX being the Letsencrypt docker container for reverse proxy. It's been set up like this for a long time, and I transitioned from an EdgeRouter to a USG maybe two months ago. The ports are correct. I didn't see anything in the UniFi settings showing it was in a DMZ. I will have to see if it changed somehow, but to do so I have to boot my server and turn on the UniFi docker... I'm thinking I'll disable SSH so I can at least get up and running.
  15. Can someone help me diagnose what is going on, with either my server or my network? I had to reboot my server this AM (unrelated). After powering it on, Fix Common Problems tells me I have invalid login attempts. Interesting. I check and see login attempts (SSH2) from WAN IPs on random ports. My network setup is Ubiquiti UniFi, and I did NOT expose any of the ports shown in the log. I was not getting these login attempts before I restarted, and I have only exposed ports for NGINX and Plex on my server. I have been running some port scan checks (https://www.grc.com/default.htm) and I don't see anything unusual; it tells me it passed, mostly stealth. I currently run the UniFi controller from my server, which I now see is not ideal. I don't see any firewall rules that would allow these connections to reach my server. Not really sure where to go from here: the GRC website says everything is good, yet I am getting WAN-side login attempts. I have shut down the server for now. Should I disable SSH? unraid6-diagnostics-20200525-0759.zip
  16. I have Fix Common Problems run weekly, and I woke up to this notification: "Your server has run out of memory, and processes (potentially required) are being killed off. You should post your diagnostics and ask for assistance on the unRaid forums." This is what I can find in the syslog: I have a 2GB limit on the Sonarr docker and a 1GB limit on Radarr. Did one of these hit its limit and cause the error? Not sure what to make of this; the server has been up and running fine for 10 days. unraid6-diagnostics-20190324-0823.zip
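For reference, per-container limits like the 2GB/1GB ones above map to Docker's `--memory` flags. This is a config sketch: on Unraid the flags typically go in the container template's "Extra Parameters" field, and the image name and values here are illustrative:

```shell
# Equivalent docker run invocation (illustrative image and limits).
# --memory sets a hard cap: if processes inside the container exceed it,
# the cgroup OOM killer kills them instead of starving the host.
# --memory-swap equal to --memory disallows swapping past the cap.
docker run -d --name sonarr --memory=2g --memory-swap=2g linuxserver/sonarr
```

Note that an OOM kill inside a capped container still shows up in the host syslog, which can look alarming even though the cap did its job.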
  17. Manually, or maybe with a script, depending on how the ransomware propagated the encryption. That's where versioning would come in. I'm not 100% sure how rsync behaves with ransomware: would it simply see the change in the file and overwrite the old file, or would it see it as a new file and save the old one? I don't think read-only access really protects you, as you're only talking about read-only for the backup machine; the main server you're backing up clearly has write access, no? I did the rsync-over-SSH thing, but didn't think it was a powerful enough solution for the long term.
  18. If the files you're backing up get changed, the changes will propagate to your backup; rsync doesn't version. You could look at rclone or Borg, like you mentioned before. You can also add some options so changed files keep the old version by renaming it with the date; this kind of works, but is a real pain when dealing with a high quantity of files.
  19. You don't have to back up locally first with Borg; it can back up directly over SSH, so there's no need for any additional storage space. You can additionally create append-only SSH keys for unRAID so Borg only has append-only access to the repository, which lets you lock down your backups further.
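For the append-only setup, this is roughly what the `authorized_keys` entry on the backup host looks like; the repository path and key material here are illustrative:

```shell
# Entry in ~/.ssh/authorized_keys on the machine holding the repository.
# A client connecting with this key is forced into "borg serve" in
# append-only mode, restricted to this one repo path: it can add new
# archives but cannot delete or rewrite existing ones.
command="borg serve --append-only --restrict-to-path /backups/unraid-repo",restrict ssh-ed25519 AAAA... unraid-backup
```

The restriction lives entirely on the server side, so even a compromised Unraid box holding this key can't destroy prior backups.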
  20. I think I figured out what was going on. I believe that when I upgraded to 6.6.7, my unassigned-device drive either defaulted to not auto-mount or was never set to and didn't mount. That drive is used for my security cameras via the Zoneminder docker. Since I was writing to /mnt/disks... and it wasn't mounted, those files were being stored in rootfs. I noticed rootfs went from 25% to 50% after 14 or so hours and figured it out. Hopefully that was it.
  21. Got through the parity check with no errors; uptime is 1 day 18 hours. At this point I think it's safe to say it wasn't a hardware issue but a software one. Now to prevent it from occurring again. I'm thinking a Docker caused the system to use too much RAM. I've read this is possible, but I'm not sure if there is anything I can do to limit a Docker's ability to do this. I read something about file calls from an app doing this; if I limit RAM in a Docker, will that prevent it? These are the Dockers I've been running for a while (only cAdvisor has access to rootfs; I disabled it for now): cadvisor, collabora, duckdns, duplicati, letsencrypt, mariadb, nextcloud, ombi, plex, radarr, sabnzbd, sonarr, tautulli, transmission (mostly turned off), zoneminder. If there are any you think I should pay special attention to, let me know. I've had them on auto-update for the last few weeks, but I think I'm going to turn that off for now. While digging through log files when the parity disk was showing missing (and I guess rootfs was full), I came across this line: Mar 11 04:26:20 unraid6 rc.diskinfo[31541]: PHP Warning: file_put_contents(): Only 4096 of 7912 bytes written, possibly out of free disk space in /etc/rc.d/rc.diskinfo on line 266 Can someone shed some light on this? How severe is it?
  22. I shrunk my array to remove some smaller disks from it since I had excess capacity and didn't want them just sitting around when I could use them for other tasks.
  23. So it survived the night, and the results of df -h show it's still only using 11% on rootfs. Can someone explain why it shows so many reads on these disks? It's still in maintenance mode, and it doesn't look like much is going on. I guess my next step is to bring the array online out of maintenance mode without Docker or VMs and start a parity check. New diagnostics attached. unraid6-diagnostics-20190312-0610.zip
  24. No. When this has happened before, the dashboard shows what's shown in the screenshot above. I can't access "Main" to see what the disks are showing; it's just perpetually stuck in the loading animation.