BDPM

Members
  • Content Count: 20
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About BDPM
  • Rank: Member
  1. BDPM

    Docker Updates

    Well, damn. Thanks, I will check out that thread.
  2. I have several containers running, and today, for the third day in a row, every single linuxserver.io container has had an update. They've all been running great for at least 6 months, and still are, even after the updates. I have no problems, but is this normal? I've noticed in the past that linuxserver.io updates more frequently than others, but this seems crazy. Just wondering if anyone else is seeing the same thing.
  3. Thanks, Saarg. I was typing while you were, I guess. Any advice for future non-freaking-out? LOL
  4. Okay... so I noticed that one of the cache drives in the cache pool was missing. I shut down the server, jiggled the cables for the cache drives, and made sure the power and data cables were secure. They're the clippy kind, so I don't know how they could have come loose, but when I rebooted, the cache pool started its btrfs rebuild and all of the containers and the appdata folder were restored. Whew... that freaked me out for a few minutes there. Any input to prevent more freaking out in the future would be appreciated. I think I'd rather have one bigger cache drive than a pool of two 120s.
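
     For future reference (mine and anyone else's), a quick way to sanity-check the pool after something like this; a minimal sketch, assuming it's mounted at the usual /mnt/cache:

       # list the devices btrfs currently sees in the pool
       btrfs filesystem show /mnt/cache
       # per-device error counters; anything non-zero is worth investigating
       btrfs device stats /mnt/cache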
  5. I did the following AFTER the 6.7.2 update, along with a Radarr update, I believe. I was watching a movie on my Plex this morning, and I also checked Sonarr and Radarr; everything was working as far as I could tell. I went and ran some errands, stopped for lunch, then came home to Plex locked up. It wouldn't restart, so I just rebooted the server. When it came back up, all of my Docker containers were gone and the appdata folder was empty... HELP!!! I do have an appdata backup, and it ran successfully last night, but I'd like someone smarter than I am to suggest a method to get back up and running. BDPM Diagnostics.zip
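
     For reference, the restore I'm imagining looks roughly like this; a sketch only, assuming the appdata backup plugin wrote a single tar.gz archive (the backup path and archive name below are examples, not my actual ones):

       # stop the Docker service first (Settings > Docker, or from the shell)
       /etc/rc.d/rc.docker stop
       # unpack the backup over the empty appdata share
       tar -xzf /mnt/user/Backups/appdata/CA_backup.tar.gz -C /mnt/user/appdata/
       # start Docker again, then re-add containers via Previous Apps
       /etc/rc.d/rc.docker start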
  6. Thanks for the reply. Honestly, I've seen MIB mentioned in documentation, but I don't know what that is. LOL. I went to the Observium IRC channel, asked around, and was told I needed to add the agent to whatever server I wanted to monitor. I started doing that via https://docs.observium.org/unix_agent/ and it seems like it should work, but when I got to "restarting the service" I couldn't figure out how on Unraid, so I just rebooted the server. When I went to continue the setup, I checked the observium_agent_xinetd file that had been scp'd over and edited: it was gone. So I threw in the towel for the day. I used the Nerd Tools plugin to install xinetd, and that is still there; the edited file just poofs after a reboot. Like I said previously, Observium is working great, just missing the hard drive temps. I'll attach a diagnostics file of my system if you want to take a look. Much appreciated, and thank you for your time and efforts. BDPM-diagnostics.zip
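
     In case anyone else hits the disappearing-file problem: Unraid runs from RAM, so anything outside /boot is rebuilt at every reboot. The usual workaround, sketched here with assumed paths, is to keep a copy on the flash drive and restore it from the go file at boot:

       # one time: save the edited config somewhere persistent
       cp /etc/xinetd.d/observium_agent_xinetd /boot/config/observium_agent_xinetd

       # add to /boot/config/go so it comes back on every boot
       # (xinetd path assumed; adjust to wherever the plugin installed it)
       cp /boot/config/observium_agent_xinetd /etc/xinetd.d/observium_agent_xinetd
       /usr/sbin/xinetd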
  7. Thanks. The reason I asked here is that everything I've found so far requires editing the SNMP config files, which, for monitoring Unraid drives, means editing Unraid system files in the /etc/snmp folder.
  8. How do I get Observium to show hard drive temperatures? I've searched around a bit, and everything I find I can't get to work. I see the current temps on the dashboard, but I'm looking to trend the temps. Observium trends everything wonderfully except hard drive temps. Any help would be greatly appreciated.
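
     For context, the approach I kept running into was net-snmp's extend directive, which hands a script's output to SNMP so Observium can graph it. A rough sketch with an assumed script path and an example drive list (untested on my box, and note that /etc/snmp edits don't survive an Unraid reboot, so the flash-drive trick from the xinetd post applies here too):

       # /etc/snmp/snmpd.conf -- expose drive temps through the extend mechanism
       # (the name and script path are examples)
       extend drivetemps /usr/local/bin/drivetemps.sh

       #!/bin/sh
       # /usr/local/bin/drivetemps.sh -- one temperature line per drive
       # (drive list is an example; enumerate your own)
       for d in /dev/sda /dev/sdb /dev/sdc; do
         smartctl -A "$d" | awk -v d="$d" '/Temperature_Celsius/ {print d": "$10}'
       done

     Whether Observium turns that output into a graph depends on how it polls extends; the Unix agent ships its own scripts for the same job.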
  9. BDPM

    6.7.0 NFS

    I have not
  10. Thanks for the input. Yes, in addition to the local backup, I rclone the same data to online storage (Backblaze B2). Thank you.
  11. Currently I have 1 parity and 2 data drives in my array. I also have a separate drive, mounted via the Unassigned Devices plugin, that I'm using to back up important data. I was thinking I could add that drive to the array and then exclude it from all of the other shares on the array. That way it stays separate from the shares and is also protected by parity. Is this correct thinking, or am I missing/forgetting something? Everything is great right now; I just thought it would be really nice to not only have the data backed up on a separate drive, as I currently do, but to also have that drive protected by parity. Any input or suggestions welcome. Thanks
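
     If I do move the drive into the array, the nightly job could target its disk share directly so nothing else ever lands on it; a sketch, assuming the drive comes up as disk3 (hypothetical):

       # back up straight to the dedicated in-array disk, bypassing user shares
       mkdir -p /mnt/disk3/backup/Documents
       rsync -av --delete /mnt/user/Documents/ /mnt/disk3/backup/Documents/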
  12. Thanks. Yeah, I'm far from a Linux guru, but I believe the very nature of rsync and rclone requires checks on both source and destination. It just baffles me why an rsync or rclone test from array to array is nearly instantaneous (unless it needs to add or update files), while the same test from the array to the unassigned drive takes nearly as long just to check as it did to write the files to an empty directory. I'm using --size-only and it's all good, adding and updating files as it should. I'm doing 47,034 files, about 296.9 GB. It took time to put it all up there, but when the daily rclone runs, it takes maybe 3 seconds. Works great. I use Backblaze B2. I'll just keep using --size-only for my local backups; it works just fine. Thanks to both of you for your input. Have a great day!
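
     For reference, the local job that now runs in seconds is just my earlier command plus the flag; --size-only compares file sizes only, skipping the slow per-file modtime check on the unassigned drive:

       rclone copy --size-only /mnt/user/Documents/ /mnt/disks/3tb-backup/Documents/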
  13. BDPM

    6.7.0 NFS

    Not sure if this is a bug or what, but here goes: NFS shares are not being seen by LibreELEC/Kodi. On version 6.6.7 I could browse and find NFS shares; after the update to 6.7.0 I cannot. I downgraded to 6.6.7 and was again able to browse for NFS shares, then re-upgraded and again was not able to. I manually added the export into Kodi and it works; I just cannot browse for it. server-diagnostics-20190517-1714.zip
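
     For anyone with the same problem, the manual workaround is typing the export path instead of browsing for it; the address below is an example, so use your own server IP and share:

       # Kodi: Add videos... > Browse > Add network location... > NFS
       nfs://192.168.1.100/mnt/user/Media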
  14. If I rclone copy or rsync to a disk that's outside the array (unassigned), it always checks the full list of files again every time and takes as long as the first run. I can work around it with size-only or time-only checks, but... A) I don't know why it does this, and I'm curious. B) I'm wondering if there's a better method. I have pictures, documents, and music on the array, and a backup drive formatted NTFS outside the array for backups only. I rsync or rclone every night from the array to the backup drive, and then to cloud storage. My current script uses rclone, but I've tried rsync and get the same result: both copy new or changed files, then check every single other file in the destination. rclone copy /mnt/user/Documents/ /mnt/disks/3tb-backup/Documents/ Sending to cloud storage with rclone takes only seconds, with no re-checking of the destination. Any info and/or advice will be greatly appreciated. Thanks in advance.
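
     For completeness, the rsync version of the same workaround (same example paths as the rclone line above; --size-only is an rsync flag too):

       # skip the per-file modtime comparison against the NTFS destination
       rsync -rv --size-only /mnt/user/Documents/ /mnt/disks/3tb-backup/Documents/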
  15. Thanks. I'll check out turbo write. Not a brand-spanking-new processor, but not 10 years old either. I have AMD's version of the Core i7; I forget what it was called.