BDPM


Everything posted by BDPM

  1. It is the original install flash drive, from May 2019. Might be a good idea to replace it regardless of whether it is the problem. Thanks!
  2. Good morning all, I went to update my Nextcloud this morning, but before I run their updater I always like to make sure the Docker container is up to date. When I went to log in to Unraid, the normal login screen was gone. It was just plain text with some lines of encountered errors. I tried refreshing the URL; nothing. The links were clickable, so I went to the terminal and ran "sudo shutdown -r now". Unraid rebooted and now seems to be back to normal. The only thing I notice is that it is now doing another parity check. It just did one the other day; I have it scheduled to run monthly on the first of the month, and it takes about 12 hrs. Anyway, I will add the text that I copied off the screen before I rebooted. I'm going to make an attempt to look through the log files to see if there is any indication in them, although I have no idea what I'm looking for or where to look for it. LOL. If anyone has any suggestions on what actions I should take, if any, I would be grateful. I have been using Unraid for a few years now, and love it. Basically, set it and forget it after all of the initial setting up. Thank you all. Unraid_Errors.doc
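For reference, Unraid keeps its live system log at /var/log/syslog, and because that log lives in RAM it is cleared on reboot, which is why copying the text off the screen before restarting was the right call. A minimal sketch of skimming a log for likely trouble lines (the helper name here is made up):

```shell
#!/bin/sh
# scan_log: print lines from a log file that mention errors, warnings,
# or failures, keeping only the most recent 50 matches. On Unraid the
# live system log is /var/log/syslog; it lives in RAM, so it is lost
# on reboot.
scan_log() {
    grep -iE 'error|warn|fail' "$1" | tail -n 50
}

# On Unraid you would run: scan_log /var/log/syslog
```

The diagnostics zip posted below bundles the same syslog, so the forum helpers can grep it the same way.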
  3. I think I picked up the term resilver back in the ZFS/FreeNAS days. But yes, rebuild, same thing.
  4. Custom backup scripts. I will resilver and then run the scripts. Thanks.
  5. The drive will be here in a day or two. Resilver, or let the backup scripts do it? Disk 3 is only for backup purposes.
  6. Thanks, all. The DiskSpeed docker revealed the problem below. So it looks like disk 3, my backup drive, is failing. That sucks, but I have critical data backed up remotely as well as locally, and non-critical data will get backed up during the daily backup. My question now is: should I just replace the drive and let the backup scripts do their job, or replace the drive and resilver it?
  7. Yes, still horribly slow. I'll try the speed test docker. Total size: 6 TB; Elapsed time: 10 minutes; Current position: 2.60 GB (0.0%); Estimated speed: 3.9 MB/sec; Estimated finish: 17 days, 14 hours, 29 minutes; Sync errors corrected: 0. I tried turning off all dockers and the VM that was running. It helped; it got up to around 9 MB/sec. LOL. I canceled the check, restarted the server, and started the check again. The above is where I stand now.
  8. Honestly, I don't know. Mover runs on a schedule, I think. The daily data backup runs and completes as normal. The weekly appdata backup isn't running; it has run before in 12 hrs. I don't know what has changed.
  9. Parity check runs once a month. It used to take about 12 to 15 hrs; now it's saying 17 days to complete. I don't even know where to begin looking. Any suggestions would be greatly appreciated. server-diagnostics-20200908-0431.zip
  10. Well, damn. I will check out that thread. Thanks.
  11. I have several containers running, and today, for the third day in a row, every single linuxserver.io container has had an update. They've all been running great for at least 6 months, and still are, even after the updates. I have no problems. But is this normal? I've noticed in the past that linuxserver.io updates more frequently than others, but this seems crazy. Just wondering if anyone else is having the same thing happen. BDPM
  12. Thanks, saarg. I was typing while you were, I guess. Any advice for not freaking out in the future? LOL BDPM
  13. Okay, so I noticed that one of the cache drives in the cache pool was missing. I shut down the server, jiggled the cables for the cache drives, and made sure the power and data cables were secure. They are the clippy kind, so I don't know how they could have gotten loose, but when I rebooted, the cache pool started its btrfs rebuild, and all of the containers and the appdata folder were restored. Whew... that freaked me out for a few minutes there. Any input to prevent more freaking out in the future would be appreciated. I think I would rather have a bigger cache drive than pool two 120s. BDPM
  14. I did the following AFTER the 6.7.2 update, along with a Radarr update, I believe. I was watching a movie on my Plex this morning, and I also checked Sonarr and Radarr; everything was working as far as I could tell. I went and ran some errands, stopped for lunch, then came home to Plex locked up. It wouldn't restart, so I just rebooted the server. When it came back up, all of my docker containers were gone and the appdata folder was empty... HELP!!! I do have an appdata backup, and it was successfully backed up last night, but I would like someone who is smarter than I am to suggest a method to get back up and running. BDPM BDPM Diagnostics.zip
  15. Thanks for the reply. Honestly, I have seen MIB mentioned in documentation, but I don't know what that is. LOL. I went to the Observium IRC channel, asked around, and was told that I needed to add the agent to whatever server I wanted to monitor. I started to do that via this site: https://docs.observium.org/unix_agent/ It seems to me that it should work, but when I got to "restarting the service" I couldn't figure out how on Unraid, so I just rebooted the server. When I went to continue the setup, I checked the observium_agent_xinetd file that had been scp'd over and edited. It was gone. So I threw in the towel for the day. I used Nerd Tools to install xinetd, and it is still there; the edited file just poofs after reboot. Like I said previously, Observium is working great, just missing the hard drive temps. I will attach a diagnostics file of my system if you wanna take a look. Much appreciated. Thank you for your time and efforts. BDPM BDPM-diagnostics.zip
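On the disappearing file: Unraid unpacks its operating system from the flash drive into RAM at every boot, so edits under /etc (including an xinetd service file) do not survive a reboot. The usual workaround is to keep a copy on the flash drive and restore it from the go script (/boot/config/go) at startup. A sketch only, with hypothetical paths:

```shell
#!/bin/sh
# restore_config: copy a saved config file from persistent storage
# (the flash drive on Unraid) back into the RAM-backed live
# filesystem. Meant to be called from /boot/config/go at boot.
restore_config() {
    src="$1"
    dst="$2"
    [ -f "$src" ] && cp "$src" "$dst"
}

# Hypothetical usage in /boot/config/go (paths are assumptions):
# restore_config /boot/config/custom/observium_agent_xinetd /etc/xinetd.d/observium_agent_xinetd
# pkill -HUP xinetd   # SIGHUP makes xinetd reload its configuration
```

The same pattern covers any /etc edit you want to survive reboots, which is also relevant to the /etc/snmp question above.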
  16. Thanks. The reason I asked here was that everything I've found so far requires editing the SNMP config files, which, in order to monitor Unraid drives, means editing Unraid system files in the /etc/snmp folder.
  17. How do I get Observium to show hard drive temperatures? I've searched around a bit, and everything I find I can't get to work. I see the current temps on the dashboard, but I'm looking to trend the temps. Observium trends everything wonderfully except hard drive temps. Any help would be greatly appreciated. BDPM
  18. Thanks for the input. Yes, in addition to the local backup, I rclone the same data to online storage (Backblaze B2). Thank you.
  19. Currently I have 1 parity and 2 data drives in my array. I also have a separate drive via Unassigned Devices plugin that I am using to backup important data. I was thinking that I could add that drive to the array and then exclude it from all of the other shares on the array. This way it is separate from shares and also protected by parity. Is this correct thinking or am I missing/forgetting something? Everything is great right now, I just thought it would be really nice to not only have the data backed up on a separate drive as I currently do, but to also have the drive protected by parity. Any input or suggestions welcome. Thanks BDPM
  20. Thanks. Yeah, I'm far from a Linux guru, but I believe that the very nature of rsync or rclone requires checks on both source and destination. It just baffles me why, when I do an rsync or rclone test from array to array, it's nearly instantaneous unless it needs to add or update file(s), but when I do the same test from the array to the unassigned drive, it takes nearly as long to simply check as it did to write the files to an empty directory. I'm using --size-only and it's all good, adding and updating files as it should. I'm doing 47,034 files for about 296.9 GB. It took time to put it up there, but when the daily rclone runs, it takes maybe 3 seconds. Works great. I use Backblaze B2. I'll just keep using --size-only for my local backups. Works just fine. Thanks to both of you for your input. Have a great day!
  21. Not sure if this is a bug or what, but here goes: NFS shares are not being seen by LibreELEC/Kodi. In version 6.6.7 I could browse and find NFS shares; after the update to 6.7 I cannot. I downgraded to 6.6.7 and was again able to browse for NFS shares. Re-upgraded, and was then again not able. I manually added the export into Kodi and it works; I just cannot browse for it. server-diagnostics-20190517-1714.zip
  22. If I rclone copy or rsync to a disk that's outside the array (unassigned), it always checks the full list of files again every time, and takes as long as the first time. I can work around it with size- or time-only checks, but... A) I don't know why it does this, and I'm curious. B) I'm wondering if there's a better method. I have pictures, documents, and music on the array, and a backup drive formatted NTFS outside the array for backups only. I rsync or rclone every night from the array to the backup drive and then to cloud storage. My current script uses rclone, but I've tried rsync and get the same result. They both copy new or changed files, then check every single other file in the destination. rclone copy /mnt/user/Documents/ /mnt/disks/3tb-backup/Documents/ Sending to cloud storage with rclone only takes seconds, with no checking of the destination again. Any info and/or advice will be greatly appreciated. Thanks in advance. BDPM
  23. Thanks, I'll check out turbo write. Not a brand-spanking-new processor, but not 10 years old either. I have AMD's version of the Core i7; I forget what it was called.
  24. I dunno why I feel the need for 2 parity drives. I'm new to Unraid. LOL. I've only ever run mdadm raid10 and ZFS raidz10, both of which require 4 drives: 2 data, 2 parity. And the ~10 TB of available space has never seen more than maybe 7 TB. Talk to me... I'm open to change. I do have an external backup of the super-critical, my-wife-will-kill-me-if-I-lose-it data, offsite at a buddy's house. I rsync to his place and he to mine via VPN. Music, TV, movies, VMs, dockers, etc. are what I have on the data drives; they can be replaced easily enough, it would just take forever. So I run a RAID array, now Unraid. I'm open to any suggestions and input. Thanks BDPM