quinnjudge

Members
  • Posts: 15

quinnjudge's Achievements

Noob (1/14) · Reputation: 0

  1. I removed the bandwidth limit in CBB, and the backup job was successful; no errors on the files like before - thanks!!! So, is this an issue with CBB or with BackBlaze B2?
  2. Yes; my thought was to ensure a backup does not interfere with web conferencing software (I'm still working full-time from my home office)...I have the limit for cloud storage set to approx. 80% of available upload bandwidth.
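The 80%-of-upload figure above is easy to work out for any connection. A quick sketch, using a hypothetical 20 Mbit/s upload link (the speed is made up for illustration; plug in your own):

```python
upload_mbps = 20   # hypothetical upload bandwidth in Mbit/s (measure your own)
headroom = 0.80    # cap the backup at ~80%, leaving ~20% for web conferencing

cap_mbps = upload_mbps * headroom                     # 16 Mbit/s for the backup
cap_kbytes_per_s = cap_mbps * 1_000_000 / 8 / 1000    # Mbit/s -> KB/s (decimal units)

print(f"set the cloud-storage limit to about {cap_kbytes_per_s:.0f} KB/s")
```

Most backup tools (CBB included) take the limit in KB/s or KiB/s, so check which unit the settings dialog expects before entering the number.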
  3. Hello, hoping to get a little direction... I've been using this container for a few months to back up to BackBlaze B2, and it has been terrific! Recently a couple of files have been giving me trouble, and I'm not sure where to start with troubleshooting. When I try to back up the files generated by CA Appdata Backup / Restore v2, I get the following message in CBB and the backup job fails:

     SSL_write() returned SYSCALL, errno = 32

     When I remove these files from the backup job it runs successfully, but adding them back in causes the error again. Any idea where the problem may lie, or who I should start the right conversation with? Thanks in advance!
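For context on the error above: errno 32 on Linux is EPIPE ("broken pipe"), meaning the remote end closed the connection while CBB was still writing over TLS. That it went away once the bandwidth limit was removed suggests the throttled connection was idle or slow enough for the far side to drop it. The errno mapping is easy to confirm with generic Python (nothing CBB-specific here):

```python
import errno
import os

# errno 32 is EPIPE: the peer closed the connection while we were writing to it.
print(errno.errorcode[32])  # symbolic name for errno 32
print(os.strerror(32))      # human-readable description ("Broken pipe" on Linux)
```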
  4. Good news - rebuild completed without errors. Bad news - now I have a reported error on the same disk:

     Sep 3 21:28:14 Proteus kernel: mdcmd (2): import 1 sdf 64 2930266532 0 WDC_WD30EFRX-68EUZN0_WD-WMC4N0862856
     Sep 3 21:28:14 Proteus kernel: md: import disk1: (sdf) WDC_WD30EFRX-68EUZN0_WD-WMC4N0862856 size: 2930266532
     Sep 3 21:28:49 Proteus emhttpd: shcmd (886): /usr/local/sbin/set_ncq sdf 1
     Sep 3 21:28:49 Proteus emhttpd: shcmd (887): echo 128 > /sys/block/sdf/queue/nr_requests
     Sep 5 20:57:41 Proteus kernel: sd 9:0:2:0: [sdf] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
     Sep 5 20:57:41 Proteus kernel: sd 9:0:2:0: [sdf] tag#0 Sense Key : 0x3 [current]
     Sep 5 20:57:41 Proteus kernel: sd 9:0:2:0: [sdf] tag#0 ASC=0x11 ASCQ=0x0
     Sep 5 20:57:41 Proteus kernel: sd 9:0:2:0: [sdf] tag#0 CDB: opcode=0x88 88 00 00 00 00 01 2f 2a 88 98 00 00 00 08 00 00
     Sep 5 20:57:41 Proteus kernel: print_req_error: critical medium error, dev sdf, sector 5086283928

     I did a short SMART test against the drive right before I started the rebuild (it came back successful)...next steps? proteus-diagnostics-20180905-2155.zip
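One note on that log: unlike the "critical target error" this disk threw when it was first disabled (a communication-level failure), "critical medium error" with Sense Key 0x3 means the drive itself failed to read the sector, so the media is now suspect. Assuming 512-byte logical sectors (the usual case for this class of drive), the failing LBA converts to a byte offset with plain arithmetic, just for orientation:

```python
SECTOR_SIZE = 512  # bytes; assuming 512-byte logical sectors for this drive

# From the log line: print_req_error: critical medium error ... sector 5086283928
failing_sector = 5086283928

offset_bytes = failing_sector * SECTOR_SIZE
offset_tib = offset_bytes / 2**40  # convert bytes to TiB

print(f"byte offset {offset_bytes} (~{offset_tib:.2f} TiB into the disk)")
```

That lands deep into this ~2.7 TiB disk, which is why a short SMART test (which only samples the drive briefly) can pass while an extended test or a full read of the surface would likely trip over it.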
  5. Server restarted, disk rebuilding...looks like I have ~8 hours until the rebuild is complete; I'll go grab some popcorn and cross my fingers. Thank you @trurl and @John_M for your quick help - it is appreciated!
  6. Thanks for the quick reply! I don't have a spare disk, so I'll have to rebuild the existing one...I'll shut down, check the connections, and bring the server back up...can you point me to the rebuild procedure? (Having a spare sitting around is on my to-do list, lol!) I do have good backups; I just did a test restore.
  7. Hi, hoping to get some advice/next steps - my server just completed a successful parity check this morning (0 errors), but less than 15 hours later, one of my drives became disabled (red 'X'), and I'm seeing the following in the disk log:

     Sep 3 16:07:16 Proteus kernel: sd 1:0:1:0: [sde] tag#2 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
     Sep 3 16:07:16 Proteus kernel: sd 1:0:1:0: [sde] tag#2 Sense Key : 0x5 [current]
     Sep 3 16:07:16 Proteus kernel: sd 1:0:1:0: [sde] tag#2 ASC=0x21 ASCQ=0x0
     Sep 3 16:07:16 Proteus kernel: sd 1:0:1:0: [sde] tag#2 CDB: opcode=0x8a 8a 00 00 00 00 00 ae a9 eb c8 00 00 00 08 00 00
     Sep 3 16:07:16 Proteus kernel: print_req_error: critical target error, dev sde, sector 2930371528
     Sep 3 16:07:16 Proteus kernel: print_req_error: critical target error, dev sde, sector 2930371528

     What are my next steps for troubleshooting/repair? Is the disk toast, or should I attempt a repair and put it back into service? Thanks! proteus-diagnostics-20180903-1956.zip
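Decoding the sense data above: Sense Key 0x5 is "Illegal Request" and ASC 0x21 is "logical block address out of range" in the SCSI standard, i.e. the command was rejected rather than the media failing to read, which is why the usual advice here is to check cables and the controller first. A minimal lookup sketch using the standard (abbreviated) SPC sense-key names, generic Python rather than any Unraid tool:

```python
# Standard SCSI sense keys, abbreviated from the SPC specification.
SENSE_KEYS = {
    0x0: "No Sense",
    0x1: "Recovered Error",
    0x2: "Not Ready",
    0x3: "Medium Error",     # the drive itself failed to read/write the media
    0x4: "Hardware Error",
    0x5: "Illegal Request",  # command rejected, e.g. LBA out of range
    0x6: "Unit Attention",
    0x7: "Data Protect",
}

def sense_key_name(key: int) -> str:
    """Map a kernel-log 'Sense Key : 0xN' value to its standard name."""
    return SENSE_KEYS.get(key, f"Reserved/vendor (0x{key:x})")

# The log above reports 'Sense Key : 0x5 [current]':
print(sense_key_name(0x5))  # Illegal Request
```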
  8. Looks like that fixed it, and everything is back up and running properly - thanks for your help!
  9. Thanks for the help so far, but it seems there may be deeper problems... After wiping/reformatting the cache pool, I am now unable to start the VM Manager; I get the error:

     error : virNetSocketNewListenTCP:343 : Unable to resolve address '127.0.0.1' service '16509': Address family for hostname not supported

     I am starting to wonder if something deeper is causing issues...I don't know if this is a completely random coincidence or somehow related to the original corruption of the cache pool... I have rebuilt the cache pool and recovered all of my docker containers. I have a (believed good) copy of all of the .img files for my VMs, and a backup of the libvirt.img file from a week ago. I am also attaching a new diagnostics file...any help with this is greatly appreciated! proteus-diagnostics-20180527-2158.zip
  10. Update: Fix Common Problems just alerted me:

      1. Unable to write to cache - Drive mounted read-only or completely full (cache was nowhere near full prior to problems)
      2. Call Traces found on your server - Your server has issued one or more call traces. This could be caused by a Kernel Issue, Bad Memory, etc. You should post your diagnostics and ask for assistance on the unRaid forums
  11. I was moving some files around on my hard drives when I suddenly noticed none of my containers were running. I went to the Docker tab, but all I see is the error, "Docker Service failed to start". I tried restarting the service manually and even rebooted the server - the service won't start. Upon reboot, a parity check also kicked off due to an unclean shutdown (I used the 'Reboot' button in the console as I normally do). Including my diagnostics file from after the reboot, when the parity check kicked off...any help would be appreciated! proteus-diagnostics-20180526-2128.zip
  12. Thanks for the quick response! I read through that topic (more like skimmed it), and I think I'll just bypass the whole Marvell headache and look into a different card. Looks like I can get a Dell PERC H310 on ebay for pretty cheap and flash it to IT mode... Anyway...thanks!
  13. I am trying to add a SATA controller card (IOCrest SI-PEX40064) to UnRAID to expand my available SATA connections. The motherboard (Supermicro X11-SSM-F-O) appears to recognize it, but I cannot see the attached SATA drive in the Unassigned Devices list. I'm a little new to UnRAID, so I'm not sure how to start troubleshooting this...any help would be appreciated...diagnostics file attached... Thanks in advance! -Quinn proteus-diagnostics-20171206-1503.zip
  14. As soon as I read this, I saw my file copy flip over to disk 2...I guess I misunderstood how high water worked (thought it was based on % filled, on a per-drive basis)...thank you for the quick response! Much appreciated!
  15. Hopefully this is something I am overlooking and easily fixable, but I don't understand why this is happening... I am copying files over the network to my UnRAID server, and it doesn't seem to be following the "high water" rule. All shares are set to "high water" and "automatically split any directory as required". Disk 1 is already pretty full, with disk 4 under the 70% fill line...why do file copies still keep filling disk 1 (see attached)? Both the Mover and straight copies to the shares themselves ("use cache" set to 'No') seem to do the same thing. I'm new to UnRAID (still running on the trial license), so please let me know if I need to include more information - thanks! -Quinn proteus-diagnostics-20171010-1435.zip
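For anyone landing here with the same confusion: high water is not a per-drive fill percentage. The water mark starts at half the largest data disk's capacity, writes go to the first disk whose free space is still above the mark, and only when every disk is down to the mark does it get halved again - which is why disk 1 keeps receiving files until its free space crosses the mark, and then copies flip to disk 2. A rough illustrative sketch of that rule (a simplified model, not Unraid's actual code; the 4 x 3 TB sizes are hypothetical):

```python
def pick_disk(free_space, mark):
    """Return the index of the first disk with free space above the mark, else None."""
    for i, free in enumerate(free_space):
        if free > mark:
            return i
    return None

def high_water_target(disk_sizes, free_space):
    """Simplified high-water allocation: start the mark at half the largest
    disk's capacity and halve it until some disk still has free space above it."""
    mark = max(disk_sizes) // 2
    while mark > 0:
        i = pick_disk(free_space, mark)
        if i is not None:
            return i, mark
        mark //= 2
    return None, 0

# Hypothetical 4 x 3 TB array: disk 1 nearly full, others partly filled.
TB = 10**12
sizes = [3 * TB] * 4
free = [0.1 * TB, 2.0 * TB, 1.8 * TB, 0.9 * TB]

disk, mark = high_water_target(sizes, free)
print(f"writes go to disk {disk + 1} (water mark at {mark / TB:.1f} TB free)")
```

With these numbers the mark is 1.5 TB, disk 1 is already below it, and disk 2 is the first disk above it, so new writes land on disk 2 - matching the behavior described in the two posts above.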