tateburns

Members

  • Posts: 30
  • Gender: Undisclosed

tateburns's Achievements

Noob (1/14)

0 Reputation

  1. I just wanted to update: I've been running my Duplicacy backups from my Mac to unRAID over SFTP for the past week with no issues since I removed all plugins. Again, it was an unscientific approach, so I don't know which plugin was causing it. If I had to guess, though, it was the Dynamix Cache Dirs one.
  2. Mounted the boot thumb drive on Windows. No errors; I was able to read it fine. Regarding the out-of-memory warnings, you've got me on the trail of something. Like I said, I didn't have many plugins installed beyond the Dynamix ones. However, I did have Dynamix Cache Dirs installed. I'm wondering if it's been running out of memory when I run my Duplicacy check/prune jobs from my Mac nightly? It seems improbable that I could fill up 8GB of memory with lists of files, but who knows? I've taken a very unscientific, scorched-earth approach here and removed every single plugin, including Dynamix Cache Dirs. At this point I am only running the bare unRAID OS. I've fired up the jobs that I think might be killing the shares to see if it makes a difference. Thanks everyone for the suggestions so far.
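For anyone chasing the same suspicion, a quick way to confirm whether the kernel's OOM killer has actually fired is to scan the kernel ring buffer. This is a sketch, not unRAID-specific tooling; the exact message wording varies by kernel version, so it matches a few common substrings:

```shell
#!/bin/sh
# Scan the kernel ring buffer for OOM-killer activity.
# The phrasing differs across kernel versions, so match several
# common substrings rather than one exact message.
dmesg 2>/dev/null | grep -iE 'out of memory|oom-killer|killed process' \
  && echo "OOM events found above" \
  || echo "no OOM events in the ring buffer"
```

If the ring buffer has already wrapped since the event, the syslog (if you have syslog mirroring enabled) would be the place to look instead.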
  3. Wow, that's a path I hadn't considered. I really appreciate the creative idea. I'll give it a shot and report back! A couple of other notes for anyone taking a peek at my diagnostic logs: I'm logging into my unRAID from Duplicacy for the SFTP backups using the unRAID root account. Don't SecOps-shame me too hard 😞
  4. Greetings, I could really use some help diagnosing my problem. I'm running Duplicacy on my Mac to back up data to my unRAID server over SFTP. I work in video production, so the datasets I'm backing up are fairly large, or at least one of them is. There are 3 volumes on my Mac: 41TB (projects archive), 2.5TB (current editing project), and 500GB (Macintosh HD). The backup jobs run hourly, and Duplicacy performs separate check and prune jobs nightly. It used to take about 7-10 days for the unRAID shares to disappear from underneath /mnt/user. At some point during the week (hard to identify when) they'd vanish both in the GUI and when logged in via the terminal. See the output below from ls -l on /mnt/:
     d????????? ? ? ? ? ? user/
     A reboot would bring the shares back and I'd be good for another 7-10 days. Now the shares are disappearing nightly, and I'm unable to make it through a full day of backup schedules. Based on the failure logs from Duplicacy, I'm guessing it is either the check or the prune jobs that are killing them. My guess is that since the dataset has grown, the length of time it takes for these jobs to kill off my shares has decreased. I had seen folks complaining that certain large rm commands could kill their shares, so maybe the prune is doing it? Here's what I've done to attempt to troubleshoot:
     • Checkdisk on all disks = Clean
     • Memtest (5 passes) = Clean
     • NFS = Disabled
     • Docker = Disabled
     • Cache disk = Not in use for any shares hosting active datasets (I still have appdata on it from when I had a docker running)
     • Upgraded unRAID = 6.11.5
     • Plugins = All upgraded, very few installed anyway
     I've done a lot of research on this issue, and the items above seem to represent the most popular things to check. At this point I'm at a complete loss for how to proceed. I've been an unRAID user for 12+ years and have some other non-critical-path servers hosting various media.
     I'm thinking about moving to TrueNAS for this system, but I really don't have the time to learn a new platform right now. I just need something stable; I don't need any features whatsoever, just a dumping ground for data. Could someone take a look at my diagnostic log and let me know if there's something obvious I'm missing? I output the logs just now while the shares are gone. You'll probably see consistent errors about my UPS losing connection; I just haven't hooked the cable up to my UPS since I lost it when we moved recently. Thank you! dagobah-diagnostics-20230225-0847.zip
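When the shares vanish and ls shows d????????? on /mnt/user, the usual suspect is the shfs process (the FUSE filesystem behind /mnt/user) having died or hung. A small sketch, to be run at the moment the shares are gone, that could help distinguish the two cases (the messages and structure here are illustrative, not official unRAID tooling):

```shell
#!/bin/sh
# Check whether the user-share FUSE process is still alive, and
# whether /mnt/user is actually readable.
if pgrep -x shfs >/dev/null 2>&1; then
    echo "shfs is running"
else
    echo "shfs is NOT running -- /mnt/user has lost its filesystem"
fi

# A stat on the mountpoint distinguishes 'hung' from 'gone': a dead
# FUSE mount typically returns 'Transport endpoint is not connected'.
stat /mnt/user >/dev/null 2>&1 \
    && echo "/mnt/user is readable" \
    || echo "/mnt/user is not readable"
```

If shfs turns out to be dead, correlating its time of death in the syslog with the nightly check/prune schedule would narrow down which job is the trigger.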
  5. Having some trouble getting out of the gate on this one. Could use some assistance if anyone would be so kind as to lend a hand? Just reading through the thread, it seems it's necessary to create system.properties, which I did. Once I restarted, some new data was written to that file, so that seems like progress. However, when I open the webui, it still says "Update in Progress..." at the UniFi splash page. I've let it sit there for 10 minutes and restarted the docker a few times. No joy. Don't know if it's helpful, but here are the contents of /unifi-video and of system.properties:
     root@unRAID:/mnt/user/appdata/unifi-video# ls -l
     total 12
     drwxr-xr-x 1 root root   96 Feb 16 19:34 db/
     -rw-r--r-- 1 root root 2204 Feb 16 19:03 keystore
     -rw-rw-rw- 1 root root  279 Feb 16 19:29 system.properties
     -rw-r--r-- 1 root root   32 Feb 16 19:03 truststore
     root@unRAID:/mnt/user/appdata/unifi-video# cat system.properties
     # unifi-video v3.6.1
     #Thu Feb 16 19:29:58 CST 2017
     is_default=true
     uuid=81b521b0-5f8c-4376-9b8c-2efcac4a00cf
     # app.http.port = 7080
     # app.https.port = 7443
     # ems.liveflv.port = 6666
     # ems.livews.port = 7445
     # ems.livewss.port = 7446
     # ems.rtmp.port = 1935
     # ems.rtsp.port = 7447
     Here is the server.log; it looks to be a MongoDB problem? This just repeats over and over:
     2017-02-16T19:38:34.237-0600 git version: nogitversion
     2017-02-16T19:38:34.237-0600 OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
     - not using --noprealloc in mongodb
     1487295514.241 2017-02-16 19:38:34.241/CST: INFO MongoDb server starting, dbdir=/usr/lib/unifi-video/data/db, port=7441 in mongodb
     1487295514.241 2017-02-16 19:38:34.241/CST: INFO MongoDB server started, acquiring client connection in app-event-bus-0
     1487295514.348 2017-02-16 19:38:34.348/CST: INFO mongod has quit with rc: 14 in mongodb
     1487295514.348 2017-02-16 19:38:34.348/CST: INFO MongoDb server stopped in mongodb
     Here is the MongoDB log:
     2017-02-16T19:45:20.540-0600 [initandlisten] SEVERE: Got signal: 6 (Aborted).
     Backtrace: 0xedb3e9 0xeda3a5 0x2b0ae65ce4b0 0x2b0ae65ce428 0x2b0ae65d002a 0xe4a213 0xe7039b 0x8869fa 0x886f3a 0x88ea86 0x87d184 0x61f92f 0x620903 0x5e943c 0x2b0ae65b9830 0x61a2d9 bin/mongod(_ZN5mongo15printStackTraceERSo+0x39) [0xedb3e9] bin/mongod() [0xeda3a5] /lib/x86_64-linux-gnu/libc.so.6(+0x354b0) [0x2b0ae65ce4b0] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x38) [0x2b0ae65ce428] /lib/x86_64-linux-gnu/libc.so.6(abort+0x16a) [0x2b0ae65d002a] bin/mongod(_ZN5mongo13fassertFailedEi+0xc3) [0xe4a213] bin/mongod(_ZN5mongo7LogFile17synchronousAppendEPKvm+0x29b) [0xe7039b] bin/mongod(_ZN5mongo3dur20_preallocateIsFasterEv+0x22a) [0x8869fa] bin/mongod(_ZN5mongo3dur19preallocateIsFasterEv+0x2a) [0x886f3a] bin/mongod(_ZN5mongo3dur16preallocateFilesEv+0x966) [0x88ea86] bin/mongod(_ZN5mongo3dur7startupEv+0x74) [0x87d184] bin/mongod(_ZN5mongo14_initAndListenEi+0x76f) [0x61f92f] bin/mongod(_ZN5mongo13initAndListenEi+0x23) [0x620903] bin/mongod(main+0x23c) [0x5e943c] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x2b0ae65b9830] bin/mongod(_start+0x29) [0x61a2d9]
     2017-02-16T19:45:30.944-0600 ***** SERVER RESTARTED *****
     2017-02-16T19:45:30.945-0600 [initandlisten] MongoDB starting : pid=272 port=7441 dbpath=/usr/lib/unifi-video/data/db 64-bit host=553fd764528a
     2017-02-16T19:45:30.945-0600 [initandlisten] db version v2.6.10
     2017-02-16T19:45:30.945-0600 [initandlisten] git version: nogitversion
     2017-02-16T19:45:30.945-0600 [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
     2017-02-16T19:45:30.945-0600 [initandlisten] build info: Linux lgw01-12 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64 BOOST_LIB_VERSION=1_58
     2017-02-16T19:45:30.945-0600 [initandlisten] allocator: tcmalloc
     2017-02-16T19:45:30.945-0600 [initandlisten] options: { net: { bindIp: "127.0.0.1", http: { enabled: false }, port: 7441 }, storage: { dbPath: "/usr/lib/unifi-video/data/db", journal: { enabled: true }, smallFiles: true }, systemLog: { destination: "file", logAppend: true, path: "logs/mongod.log" } }
     2017-02-16T19:45:30.962-0600 [initandlisten] journal dir=/usr/lib/unifi-video/data/db/journal
     2017-02-16T19:45:30.962-0600 [initandlisten] recover : no journal files present, no recovery needed
     2017-02-16T19:45:30.962-0600 [initandlisten] LogFile::synchronousAppend failed with 8192 bytes unwritten out of 8192 bytes; b=0x384a000 errno:22 Invalid argument
     2017-02-16T19:45:30.962-0600 [initandlisten] Fatal Assertion 13515
     2017-02-16T19:45:30.964-0600 [initandlisten] 0xedb3e9 0xe6fb3f 0xe4a1c1 0xe7039b 0x8869fa 0x886f3a 0x88ea86 0x87d184 0x61f92f 0x620903 0x5e943c 0x2b9099f49830 0x61a2d9 bin/mongod(_ZN5mongo15printStackTraceERSo+0x39) [0xedb3e9] bin/mongod(_ZN5mongo10logContextEPKc+0x21f) [0xe6fb3f] bin/mongod(_ZN5mongo13fassertFailedEi+0x71) [0xe4a1c1] bin/mongod(_ZN5mongo7LogFile17synchronousAppendEPKvm+0x29b) [0xe7039b] bin/mongod(_ZN5mongo3dur20_preallocateIsFasterEv+0x22a) [0x8869fa] bin/mongod(_ZN5mongo3dur19preallocateIsFasterEv+0x2a) [0x886f3a] bin/mongod(_ZN5mongo3dur16preallocateFilesEv+0x966) [0x88ea86] bin/mongod(_ZN5mongo3dur7startupEv+0x74) [0x87d184] bin/mongod(_ZN5mongo14_initAndListenEi+0x76f) [0x61f92f] bin/mongod(_ZN5mongo13initAndListenEi+0x23) [0x620903] bin/mongod(main+0x23c) [0x5e943c] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x2b9099f49830] bin/mongod(_start+0x29) [0x61a2d9]
     2017-02-16T19:45:30.964-0600 [initandlisten] ***aborting after fassert() failure
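For what it's worth, the errno:22 (EINVAL) on LogFile::synchronousAppend is the signature of mongod's journal writes failing: the journal uses direct/synchronous I/O, which some filesystems (FUSE-backed layers among them) don't support. A hedged way to probe whether a given directory supports direct I/O, assuming GNU dd is available (the target path is just an example; point it at wherever the db directory actually lives):

```shell
#!/bin/sh
# Probe a directory for O_DIRECT support by attempting a single
# 4 KiB direct write. EINVAL from the write shows up as a dd failure.
dir="${1:-.}"   # pass the appdata/db directory to test; defaults to cwd
if dd if=/dev/zero of="$dir/.dio_probe" bs=4096 count=1 oflag=direct 2>/dev/null; then
    echo "direct I/O OK in $dir"
else
    echo "direct I/O FAILED in $dir (mongod journaling would abort here)"
fi
rm -f "$dir/.dio_probe"
```

If the probe fails on the appdata path, the workaround people commonly report is mapping the container's data directory to a disk or cache path (e.g. /mnt/cache/appdata rather than /mnt/user/appdata) so mongod writes bypass the FUSE layer.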
  6. That reminds me, I've got a drive I need to file an RMA for as well. It's definitely worth the careful effort, because after this upgrade I'll be set for 2014. I'm hoping that 4TB drives start coming down in price and that late next year I'll scoop some up in a Black Friday deal. I'm not especially interested in growing my array out, but rather up, to reduce risk. At least until unRAID supports dual parity (which may or may not ever happen). Thanks for chatting it up with me. Good luck with the upgrades!
  7. Argh, you're right! I was hoping for someone to tell me, "Hey, don't worry about it, you'll be fine." To be honest, though, I definitely see how minimizing risk by making sure I have fresh parity while rebuilding a disk is valuable. Besides, running a preclear on a 3TB drive takes a long time; it's not much less than the time to rebuild a drive and resync parity, so I'll just bite the bullet and check parity after every drive rebuild. Thanks for talking me out of doing something dumb, guys!
  8. Greetings, hopefully an easy question. My array consists of 8 disks total: 4x 3TB and 4x 2TB. I just received my new drives in the mail and I'll be upgrading the last of the 2TB drives to 3TB. Last year I did this for my 1TB drives, and before I started upgrading anything I ran a parity check. After each individual disk rebuild I also ran a parity check before upgrading the next drive. The parity checks take about a day on my array, so this added an extra 4-5 days onto the whole upgrade process. My question is: after running that initial parity check and disabling the mover script so nothing new is added to the array, is it really necessary to run a parity check after upgrading and rebuilding each disk?
  9. Got it figured out. I blew away my go script after the upgrade, thinking I didn't need it anymore. Apparently you do, if only to start the management utility:
     #!/bin/bash
     /usr/local/sbin/emhttp &
     It seems pretty silly that you need a one-line startup script and they couldn't build that in somewhere else.
  10. Tried renaming to /boot/config/plugins.old and rebooting, and still no joy. I did notice that Plex went ahead and recreated a /boot/config/plugins folder on boot. It looks like that package was still sitting in /boot/extra, so I removed it and rebooted. No luck there either. Hooked a monitor up to my server and watched it boot; everything looks clean. It comes up fine and just sits patiently at the unRAID login prompt.
  11. Greetings, I upgraded from 4.7 to 5.0rc8 this afternoon. I got all of my apps (Sickbeard, SABnzbd, CouchPotato, Plex) running with the new plugin installs, and got SimpleFeatures installed as well. I rebooted the server once I got home to install a new drive and start preclearing; to do this, I took the array offline and initiated a clean shutdown. Once I booted the server back up, I can no longer access http://tower/main or any of my apps. What's strange is that it looks like everything starts successfully in the syslog. I launched unMenu to get some more insight, and sure enough the array is still offline. Can someone help me jumpstart things? Not quite sure what to do at this point. syslog-2012-11-30.txt
  12. Writing via user shares. Enabling the cache disk for the shares appears to have resolved the issue! I still have more testing to do, but things are looking good. Update: Verified everything is fixed now. I'm not sure why having the cache drive disabled for some shares would cause this behavior, but enabling it has resolved the issue.
  13. I'm sorry to junk up the forums, but I had originally started this thread over in the Applications sub-forum thinking it was a problem with Sickbeard or SABnzbd. I've now verified it is NOT, so I want to post in an area where my issue will get more visibility. Whenever I copy a new file into a directory, the respective directory's date modified does not change. If I modify a file in said directory, then the date modified does update appropriately. I'm testing this from my Windows box against my unRAID server via SMB shares; the same thing happens if I perform these actions from my Mac Mini. Likewise, the date modified does not update when new shows are dumped in by Sickbeard. Has anyone run into this before?
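To take Windows and SMB out of the equation, the same behavior can be checked locally on the server. A small sketch: on a POSIX filesystem, creating a new entry in a directory is supposed to bump that directory's modification time.

```shell
#!/bin/sh
# Create a scratch directory, add a file to it, and compare the
# directory's mtime before and after. POSIX requires that creating
# an entry updates the parent directory's modification time.
d=$(mktemp -d)
before=$(stat -c %Y "$d")
sleep 1                      # ensure a visible 1-second timestamp step
touch "$d/newfile"
after=$(stat -c %Y "$d")
if [ "$after" -gt "$before" ]; then
    echo "directory mtime updated"
else
    echo "directory mtime NOT updated"
fi
rm -rf "$d"
```

Running this against a user share under /mnt/user versus a disk share under /mnt/diskX would show whether the user-share layer is the piece swallowing the update, or whether SMB is misreporting it.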
  14. Great news! Can you please post the exact model number of the RAM you used? I know that SuperMicro boards can be very picky.
  15. Thank you for the responses. This is very helpful, especially considering I built the server in haste and did a terrible job with cable management. It's nice to know I can tear it down and rebuild without having to worry about which drives plug into which SATA ports.