dalben

Members
  • Content Count

    1215
  • Joined

  • Last visited

Community Reputation

10 Good

About dalben

  • Rank
    sleepy
  • Birthday 09/30/1966

Converted

  • Gender
    Male
  • Location
    Singapore
  • Personal Text
    Too old to know everything

Recent Profile Visitors

1000 profile views
  1. Thanks John. You're right. I had some issues with a drive, so I moved in my hot spare that I normally use as an unassigned drive for backups. CA Backup/Restore ran, couldn't find the path and, I guess as you say, backed up to memory. I didn't realise that was the behaviour of CA B/R; something to be aware of in the future.
  2. My server was in a strange state when I got home. I logged into the webgui and it showed no disks - data, parity, cache, boot, etc. The Main/Array Operation tab had just two buttons, reboot and shutdown. I could SSH in and look around; everything seemed mounted. I tried to reboot from the gui and got nothing but a blank page. The first reboot from the command line did nothing and returned a prompt. The second one seemed to do something; after a wait, as I was tailing the syslog, it went through its startup sequence, installing plugins and whatnot. It finished the startup sequence and kicked into a parity check. From the webgui everything seems normal now. I'll let the parity check run. Attached are the diagnostics and a manual copy of the syslog from when it was in this weird state. Any ideas? tdm-diagnostics-20190306-2110.zip syslog
  3. OK, thanks. I'll kick one off tonight then to see how it goes. As an aside, it might be a worthy addition to the Parity Check summary to say it did correct xxxxxx errors, to avoid confusing the easily confused like me.
  4. So the correcting check ran. Last check completed on Sunday, 03-03-2019, 18:07 (today), finding 488376000 errors. Duration: 17 hours, 54 minutes, 33 seconds. Average speed: 93.1 MB/sec. The log has a fair few of these entries, then stops:
     Mar 3 08:40:58 tdm kernel: md: recovery thread: P corrected, sector=7814037848
     Mar 3 08:40:58 tdm kernel: md: recovery thread: P corrected, sector=7814037856
     Mar 3 08:40:58 tdm kernel: md: recovery thread: stopped logging
     Then we see:
     Mar 3 18:07:53 tdm kernel: md: sync done. time=64472sec
     Mar 3 18:07:54 tdm kernel: md: recovery thread: completion status: 0
     So now I'm trying to work out whether it actually corrected those errors. I can't see a log entry or comment anywhere giving the number of errors corrected. A 17-hour parity check is about 5 hours longer than usual, so I assume it did a fair bit of extra work, but I'd like to see some confirmation.
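One way to confirm corrections were actually written is to tally the "P corrected" lines in the syslog (or a saved copy from the diagnostics). A minimal sketch, assuming the log format shown above; since the recovery thread stopped logging partway through, the count is only a lower bound:

```shell
# Count the "P corrected" entries in a syslog file.
# $1 = path to the syslog, or a saved copy of it (e.g. from Tools > Diagnostics).
count_corrected() {
  grep -c 'recovery thread: P corrected' "$1"
}
```

For example, `count_corrected /var/log/syslog` on the live log; a non-zero count confirms the correcting check really did write fixes.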
  5. Thanks. It looks like you're right. I started a correcting check; all was fine until the 4TB mark, and now it's correcting errors at a rapid rate. As my biggest data disk is 4TB, that seems in line with your thoughts.
  6. I'm assuming it was the parity rebuild, not the copy, that took a while as well. To rebuild the parity, am I right that these are the correct steps: unassign the parity drive, then start and stop the array, then reassign the parity drive and restart the array?
  7. I used the parity swap procedure found here: https://wiki.unraid.net/The_parity_swap_procedure Moved the 4TB parity to a data drive, then chose the new 6TB as the parity. I'm pretty sure I ran a full parity check after the swap and remember seeing the speed increase to 194.5 MB/s, which I assumed was due to a 6TB parity run knowing there'd be 2TB of nothing to check. But I could be mistaken. What would be the risks of another parity check with corrections vs rebuilding parity from scratch? I haven't noticed any file corruption (yet).
  8. I've just noticed that since Jan 1st my parity checks have thrown up 480 million errors. The last two monthly checks just didn't register; I must have ignored that number as a date or something. I realised yesterday there were errors, and have now noticed it's been the last two monthly checks. With a number that size appearing out of nowhere, and without having experienced any issues with the server, could it be something other than real parity errors? Is there anything I can check or look at to see the real state of my server? I'm more inclined to think they are real errors, with the parity check duration increasing by about 4 hours. The only real change is that the Dec 02 run is when I installed a 6TB parity disk. The last few checks:
     2019-03-01, 12:14:33  12 hr, 14 min, 32 sec  136.2 MB/s  OK  488376000
     2019-02-01, 12:15:10  12 hr, 15 min, 9 sec   136.1 MB/s  OK  488376000
     2019-01-01, 12:15:14  12 hr, 15 min, 13 sec  136.0 MB/s  OK  488376000
     2018-12-02, 15:46:17  8 hr, 34 min, 21 sec   194.5 MB/s  OK  0
     2018-11-30, 04:53:20  8 hr, 36 min, 50 sec   129.0 MB/s  OK  0
     2018-11-01, 08:30:29  8 hr, 30 min, 28 sec   130.6 MB/s  OK  0
     2018-10-12, 06:25:34  8 hr, 33 min, 52 sec   129.8 MB/s  OK  0
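A back-of-envelope check on that error count, assuming each reported sync error covers one 4 KiB block (an assumption about how Unraid counts errors, not a documented fact): 488376000 errors would span roughly 2 TB, which is exactly the slice of a 6 TB parity disk beyond the largest 4 TB data disk, and would line up with the errors starting at the 4 TB mark:

```shell
# Rough arithmetic only; the 4 KiB-per-error unit is an assumption.
errors=488376000
block=4096                        # assumed bytes per reported sync error
region=$((errors * block))        # bytes the errors would span
echo "$region"                    # 2000388096000, i.e. ~2.0 TB
```

That ~2 TB matches the gap between the new 6 TB parity and the 4 TB largest data disk, which would be consistent with stale or unbuilt parity in that region rather than random corruption.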
  9. OK, thanks. Looking back, the initial problems I had setting up this container look like they were because of this need to restart the USG router to get the port forwarding working again. Also explains why it "magically" started working for me. I imagine the router upgraded itself and rebooted.
  10. No. I deleted the old one and set it all up from scratch. Tried reinstalling a couple of other times too. I'll keep sniffing around and see if I can see anything
  11. Is anyone seeing huge CPU usage with this container? With no cameras connected, Zoneminder was pushing 2 of my 4 i5-2500 cores to around 80%. Stopping the container brings the CPU back to low single digits idle.
  12. Is no one else using this combo of containers and seeing this problem? Just myself and one other person, as far as I've seen. I'm surprised, as they are both popular containers.
  13. Reposting this from the nginx thread, where it didn't get any answers. I'm not sure where the problem is: when the nginx docker stops and restarts (nightly backup, etc.), the port forwarding from my router (USG, using the LSIO Unifi Docker) no longer works. I have to restart the Unifi container/controller for the port forwarding to work. Does anyone else have this combo of containers and experience the same problem?
  14. I have an issue where, when this docker stops and restarts (nightly backup, etc.), the port forwarding from my router (USG, using the LSIO Unifi Docker) no longer works. I have to restart the Unifi container/controller for the port forwarding to work. Does anyone else have this combo of containers and experience the same problem?
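A possible stopgap while waiting for a proper answer: restart the Unifi container automatically after the nginx container comes back from its nightly backup, so the USG re-applies its port-forwarding rules. A minimal sketch; "unifi-controller" is an assumed container name, substitute whatever yours is called:

```shell
# Workaround sketch, not a fix: bounce the Unifi controller container.
# $1 = Unifi container name; defaults to the assumed "unifi-controller".
restart_unifi() {
  docker restart "${1:-unifi-controller}"
}
```

This could be called from the tail end of the backup script, or from a cron entry scheduled shortly after the backup window.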