tateburns
Members · 30 posts

Everything posted by tateburns

  1. I just wanted to give an update: I've been running my Duplicacy backups from my Mac to unRAID over SFTP for the past week with no issues since I removed all plugins. Again, it was an unscientific approach, so I don't know which plugin was causing it. If I had to guess, though, it was the Dynamix Cache Dirs one.
  2. Mounted the boot thumb drive on Windows. No errors; I was able to read it fine. Regarding out-of-memory warnings, you've got me on the trail of something. Like I said, I didn't have many plugins installed beyond the Dynamix ones. However, I did have Dynamix Cache Dirs installed. I'm wondering if it's been running out of memory when I run my Duplicacy check/prune jobs from my Mac nightly? It seems improbable that I could fill up 8GB of memory with lists of files, but who knows? I've taken a very unscientific, scorched-earth approach here and removed every single plugin, including Dynamix Cache Dirs. At this point I am running only the bare unRAID OS. I've fired up the jobs that I think might be killing the shares to see if it makes a difference. Thanks, everyone, for the suggestions so far.
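     (In case it helps anyone chasing a similar problem: one way to confirm whether the kernel OOM killer is actually firing during those nightly jobs is to check the kernel log right after a failure. A minimal sketch, assuming a stock unRAID install; adjust paths to your setup:

     # Did the OOM killer run? (kernel ring buffer)
     dmesg | grep -iE "out of memory|oom-killer"
     # Same check against the system log
     grep -i oom /var/log/syslog
     # Snapshot memory use while the check/prune jobs are running
     free -m

     If the cache_dirs process is the culprit, it should show up in the OOM killer's task dump.)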
  3. Wow, that's a path I hadn't considered. I really appreciate the creative idea. I'll give it a shot and report back! One other thing, for anyone who takes a peek at my diagnostic logs: I'm logging into my unRAID from Duplicacy for the SFTP backups using the unRAID root account. Don't SecOps shame me too hard 😞
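     (Related sketch, in case it's useful to anyone: instead of root, a dedicated account can own the SFTP side. The names and paths below are placeholders, and note that users added from the unRAID command line don't persist across reboots, so the webGUI Users page is the durable way to create the account:

     # Hypothetical non-root account for Duplicacy's SFTP access
     useradd -m dupbackup
     passwd dupbackup
     # Give it ownership of the backup share only
     chown -R dupbackup /mnt/user/backups

     Then repoint the Duplicacy storage URL at that user, something like sftp://dupbackup@tower/... per Duplicacy's SFTP storage docs.)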
  4. Greetings, I could really use some help diagnosing what my problem might be. I'm running Duplicacy on my Mac to back up data to my unRAID over SFTP. I work in video production, so the datasets I'm backing up are fairly large, or at least one of them is: 3 volumes on my Mac, 41TB (projects archive), 2.5TB (current editing project), and 500GB (Macintosh HD). The backup jobs run hourly. Duplicacy performs separate check and prune jobs nightly. It used to take about 7-10 days for unRAID shares to disappear from underneath /mnt/user. At a certain point during the week (hard to identify when) they'd vanish both in the GUI and when logged in via the terminal. See the output below for ls -l on /mnt:

     d????????? ? ? ? ? ? user/

     A reboot would bring the shares back and I'd be good for another 7-10 days. Now the shares are disappearing nightly; I'm unable to make it through a full day of backup schedules. Based on the failure logs from Duplicacy, I'm guessing it is either the check or the prune jobs that are killing them. My guess is that since the dataset has grown, the length of time it takes for these jobs to kill off my shares has decreased. I had seen folks complaining that certain large rm commands could kill their shares, so maybe the prune is doing it? Here's what I've done to attempt to troubleshoot:

     • Checkdisk on all disks = Clean
     • Memtest (5 passes) = Clean
     • NFS = Disabled
     • Docker = Disabled
     • Cache disk = Not in use for any shares hosting active datasets (I still have appdata on it from when I had a docker running)
     • Upgraded unRAID = 6.11.5
     • Plugins = All upgraded, very few installed anyway

     I've done a lot of research on this issue, and the items above seem to represent the most popular things to check. At this point I'm at a complete loss for how to proceed. I've been an unRAID user for about 12+ years and have some other non-critical-path servers hosting various media. I'm thinking about moving to TrueNAS for this system, but I really don't have the time to learn a new platform right now. I just need something stable and really don't need any features whatsoever, just a dumping ground for data. Could someone take a look at my diagnostic log and let me know if there's something obvious I'm missing? I output the logs just now, while the shares are gone. You'll probably see consistent errors about my UPS losing connection; I just haven't hooked the cable up to my UPS since I lost it when we moved recently. Thank you! dagobah-diagnostics-20230225-0847.zip
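     (One more data point I can try to gather next time it happens, before rebooting. The d????????? on /mnt/user suggests the FUSE mount behind the user shares (shfs) has died, so here is a sketch of what seems worth capturing in that moment, assuming standard unRAID paths:

     # Confirm the broken mount point
     ls -la /mnt
     # Is the shfs user-share process still running?
     ps aux | grep [s]hfs
     # Kernel messages around the failure (OOM kills, FUSE errors)
     dmesg | tail -50
     # System log from the same window
     tail -100 /var/log/syslog)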
  5. Having some trouble getting out of the gate on this one. Could use some assistance if anyone would be so kind as to lend a hand. Just reading through the thread, it seems it's necessary to create system.properties, which I did. Once I restarted, some new data was written to that file, so that seems like progress. However, when I open the web UI, it still says "Update in Progress..." at the UniFi splash page. I've let it sit there for 10 minutes and restarted the docker a few times. No joy. Don't know if it's helpful, but here are the contents of /unifi-video and of system.properties:

     root@unRAID:/mnt/user/appdata/unifi-video# ls -l
     total 12
     drwxr-xr-x 1 root root   96 Feb 16 19:34 db/
     -rw-r--r-- 1 root root 2204 Feb 16 19:03 keystore
     -rw-rw-rw- 1 root root  279 Feb 16 19:29 system.properties
     -rw-r--r-- 1 root root   32 Feb 16 19:03 truststore

     root@unRAID:/mnt/user/appdata/unifi-video# cat system.properties
     # unifi-video v3.6.1
     #Thu Feb 16 19:29:58 CST 2017
     is_default=true
     uuid=81b521b0-5f8c-4376-9b8c-2efcac4a00cf
     # app.http.port = 7080
     # app.https.port = 7443
     # ems.liveflv.port = 6666
     # ems.livews.port = 7445
     # ems.livewss.port = 7446
     # ems.rtmp.port = 1935
     # ems.rtsp.port = 7447

     Here is the server.log; looks to be a MongoDB problem? This just repeats over and over:

     2017-02-16T19:38:34.237-0600 git version: nogitversion
     2017-02-16T19:38:34.237-0600 OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
     - not using --noprealloc in mongodb
     1487295514.241 2017-02-16 19:38:34.241/CST: INFO MongoDb server starting, dbdir=/usr/lib/unifi-video/data/db, port=7441 in mongodb
     1487295514.241 2017-02-16 19:38:34.241/CST: INFO MongoDB server started, acquiring client connection in app-event-bus-0
     1487295514.348 2017-02-16 19:38:34.348/CST: INFO mongod has quit with rc: 14 in mongodb
     1487295514.348 2017-02-16 19:38:34.348/CST: INFO MongoDb server stopped in mongodb

     And here is the MongoDB log:

     2017-02-16T19:45:20.540-0600 [initandlisten] SEVERE: Got signal: 6 (Aborted).
     Backtrace: 0xedb3e9 0xeda3a5 0x2b0ae65ce4b0 0x2b0ae65ce428 0x2b0ae65d002a 0xe4a213 0xe7039b 0x8869fa 0x886f3a 0x88ea86 0x87d184 0x61f92f 0x620903 0x5e943c 0x2b0ae65b9830 0x61a2d9
     bin/mongod(_ZN5mongo15printStackTraceERSo+0x39) [0xedb3e9]
     bin/mongod() [0xeda3a5]
     /lib/x86_64-linux-gnu/libc.so.6(+0x354b0) [0x2b0ae65ce4b0]
     /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x38) [0x2b0ae65ce428]
     /lib/x86_64-linux-gnu/libc.so.6(abort+0x16a) [0x2b0ae65d002a]
     bin/mongod(_ZN5mongo13fassertFailedEi+0xc3) [0xe4a213]
     bin/mongod(_ZN5mongo7LogFile17synchronousAppendEPKvm+0x29b) [0xe7039b]
     bin/mongod(_ZN5mongo3dur20_preallocateIsFasterEv+0x22a) [0x8869fa]
     bin/mongod(_ZN5mongo3dur19preallocateIsFasterEv+0x2a) [0x886f3a]
     bin/mongod(_ZN5mongo3dur16preallocateFilesEv+0x966) [0x88ea86]
     bin/mongod(_ZN5mongo3dur7startupEv+0x74) [0x87d184]
     bin/mongod(_ZN5mongo14_initAndListenEi+0x76f) [0x61f92f]
     bin/mongod(_ZN5mongo13initAndListenEi+0x23) [0x620903]
     bin/mongod(main+0x23c) [0x5e943c]
     /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x2b0ae65b9830]
     bin/mongod(_start+0x29) [0x61a2d9]
     2017-02-16T19:45:30.944-0600 ***** SERVER RESTARTED *****
     2017-02-16T19:45:30.945-0600 [initandlisten] MongoDB starting : pid=272 port=7441 dbpath=/usr/lib/unifi-video/data/db 64-bit host=553fd764528a
     2017-02-16T19:45:30.945-0600 [initandlisten] db version v2.6.10
     2017-02-16T19:45:30.945-0600 [initandlisten] git version: nogitversion
     2017-02-16T19:45:30.945-0600 [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
     2017-02-16T19:45:30.945-0600 [initandlisten] build info: Linux lgw01-12 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64 BOOST_LIB_VERSION=1_58
     2017-02-16T19:45:30.945-0600 [initandlisten] allocator: tcmalloc
     2017-02-16T19:45:30.945-0600 [initandlisten] options: { net: { bindIp: "127.0.0.1", http: { enabled: false }, port: 7441 }, storage: { dbPath: "/usr/lib/unifi-video/data/db", journal: { enabled: true }, smallFiles: true }, systemLog: { destination: "file", logAppend: true, path: "logs/mongod.log" } }
     2017-02-16T19:45:30.962-0600 [initandlisten] journal dir=/usr/lib/unifi-video/data/db/journal
     2017-02-16T19:45:30.962-0600 [initandlisten] recover : no journal files present, no recovery needed
     2017-02-16T19:45:30.962-0600 [initandlisten] LogFile::synchronousAppend failed with 8192 bytes unwritten out of 8192 bytes; b=0x384a000 errno:22 Invalid argument
     2017-02-16T19:45:30.962-0600 [initandlisten] Fatal Assertion 13515
     2017-02-16T19:45:30.964-0600 [initandlisten] 0xedb3e9 0xe6fb3f 0xe4a1c1 0xe7039b 0x8869fa 0x886f3a 0x88ea86 0x87d184 0x61f92f 0x620903 0x5e943c 0x2b9099f49830 0x61a2d9
     bin/mongod(_ZN5mongo15printStackTraceERSo+0x39) [0xedb3e9]
     bin/mongod(_ZN5mongo10logContextEPKc+0x21f) [0xe6fb3f]
     bin/mongod(_ZN5mongo13fassertFailedEi+0x71) [0xe4a1c1]
     bin/mongod(_ZN5mongo7LogFile17synchronousAppendEPKvm+0x29b) [0xe7039b]
     bin/mongod(_ZN5mongo3dur20_preallocateIsFasterEv+0x22a) [0x8869fa]
     bin/mongod(_ZN5mongo3dur19preallocateIsFasterEv+0x2a) [0x886f3a]
     bin/mongod(_ZN5mongo3dur16preallocateFilesEv+0x966) [0x88ea86]
     bin/mongod(_ZN5mongo3dur7startupEv+0x74) [0x87d184]
     bin/mongod(_ZN5mongo14_initAndListenEi+0x76f) [0x61f92f]
     bin/mongod(_ZN5mongo13initAndListenEi+0x23) [0x620903]
     bin/mongod(main+0x23c) [0x5e943c]
     /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x2b9099f49830]
     bin/mongod(_start+0x29) [0x61a2d9]
     2017-02-16T19:45:30.964-0600 [initandlisten] ***aborting after fassert() failure
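     (The line that seems to matter is "LogFile::synchronousAppend failed ... errno:22 Invalid argument" right before "Fatal Assertion 13515". My hedged read is that MongoDB's journal wants direct I/O, which unRAID's FUSE-based /mnt/user path doesn't support, so mongod aborts on startup. One way to test that theory with a throwaway file:

     # Does this path accept O_DIRECT writes? An "Invalid argument" failure here would match the mongod error
     dd if=/dev/zero of=/mnt/user/appdata/unifi-video/odirect-test bs=8k count=1 oflag=direct
     rm /mnt/user/appdata/unifi-video/odirect-test

     If that dd fails with "Invalid argument", mapping the container's data directory to a disk path such as /mnt/cache/appdata/unifi-video instead of the /mnt/user/... path may be the workaround.)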