weirdcrap

Everything posted by weirdcrap

  1. I have tried configuring it in unMenu as usb/lp0 and lp1 to see if that would fix the issue; still no dice. Nothing from p910nd is showing up in the syslog, but as I noted above, the process is indeed running. I checked the print processor settings and it is set to RAW, so that isn't the issue either. I'm not sure what else to check. Does anyone have this working successfully on 5.0.5?
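The next thing I plan to try is taking p910nd out of the equation entirely by writing a few bytes straight at the device node (a rough sanity check; plenty of inkjets ignore plain text, so nothing printing proves little, but an I/O error here would point at the node/cable/driver rather than at p910nd):

echo -e "p910nd test\f" > /dev/usb/lp0   # \f is a form feed; run as root on the server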
  2. Anyone have some pointers? I would rather not do this through trial and error if it can be avoided. From what I understand about unRAID, it unpacks a fresh copy of the OS into RAM at every boot, so would creating a new folder in /dev even work?
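From what I've read since, /dev is rebuilt in RAM on every boot, so anything created there by hand disappears on restart; the usual unRAID answer is to recreate it from the go file on the flash drive. A minimal sketch, assuming the printer really enumerates as /dev/usb/lp0 (check your syslog) and that usblp0 is the name p910nd was configured with:

# appended to /boot/config/go (illustrative)
ln -sf /dev/usb/lp0 /dev/usblp0   # recreate the node name p910nd expects on every boot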
  3. I installed the package through unMenu and followed the instructions in the main p910nd thread for configuring the rest. I have the port and everything set up on Windows according to the thread and have tried to print to the server, but nothing is showing up in the syslog, so I clearly don't have the server set up correctly. The only thing that doesn't seem to be in order is the /dev/usblp* device. In my dev folder I have "/dev/usb/lp0" and "/dev/usb/lp1". Is the fix as simple as creating usblp0 in the dev folder and symlinking it to the appropriate unRAID-created node (i.e. "/dev/usb/lp0")?

Relevant information:
- I am running unRAID v5.0.5, unMenu version 1.6, revision 278.
- The printer is a Canon Pixma MP830.
- I installed the package through unMenu's package manager, setting:
  - the printer port as usblp0
  - Bidirectional support: YES
  - RC script: Installed
- I did not add the two lines from Dase's instructions to my go file, as I did not think I would need them since I installed the package through unMenu.

Here is my syslog output from connecting the printer (you can find the full log attached below). I believe usblp1 is the memory card reader or something along those lines, since underneath it says "Direct-Access Canon MP830Storage".

These are the commands and their outputs that were suggested in the original thread:

strings /boot/packages/p910nd-0.93/p910nd | grep subsys
/var/lock/subsys/p910%cd

ls -ld /var/log/subsys
drwxrwxrwx 2 root root 40 2014-06-29 17:31 /var/log/subsys/

ls -ld /var/lock/subsys
drwxr-xr-x 2 root root 0 2014-06-29 17:31 /var/lock/subsys/

ls -l /var/lock/subsys
total 0
-rw-rw-rw- 1 root root 0 2014-06-29 17:31 apcupsd
-rw-r--r-- 1 root root 0 2014-06-29 17:31 p9100d

ls -l /dev/usblp*
/bin/ls: cannot access /dev/usblp*: No such file or directory

ps -ef | grep p910
root 2243 1 0 17:31 ? 00:00:00 /usr/sbin/p9100d -f /dev/usblp0 -b
root 3460 3292 0 17:33 pts/0 00:00:00 grep p910

Thanks for the help,
-Chris
syslog-2014-06-29.txt
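If it really is that simple, I figure the symlink idea can be tested in place before making anything permanent; something like the following, assuming the printer is actually lp0:

ln -s /dev/usb/lp0 /dev/usblp0   # give p910nd the device name it was configured with
ls -l /dev/usblp0                # verify the link points at /dev/usb/lp0

and then restart p910nd so it reopens the device.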
  4. EDIT: Moved to its own thread in hopes of getting a response. syslog-2014-06-29.txt
  5. Thanks for the info. At the time I took the screenshots there were fewer than 1,000 lines in the full downloaded log, so the logger restarting isn't what caused it, but either way it seems to be fixed now. I rebooted yesterday to no avail, but this morning, after playing around with snap, I rebooted again and the log page seems to be working fine.

I figured the ssmtp issue I was having had something to do with that after reading the thread I linked to. I just went ahead and had KeePass exclude any "special" shell characters from the password generator for my email address. Thanks for the quick response.

@joelones: a quick search of the forum brought up a script (http://lime-technology.com/forum/index.php?topic=10491) that attempts to "secure" unMenu by setting up a username/password requirement and killing the unMenu interface after x amount of time. The thread is old, so it probably won't work out of the box, but if you are so inclined you could probably rework it to fit your needs.
  6. A couple of questions after upgrading to the new unRAID (v5.0.5) / unMenu (1.6) versions.

1. On the first page I noticed your screenshot showing the user script page buttons themed to match the rest of the interface. However, on my install they are just plain gray buttons. Was the styling removed after that screenshot was posted, or did I bugger my install somehow?

2. I went to view my syslog using the syslog page and noticed that the full log is not being displayed. I see the most recent lines from the main page (mainlog.png), but I am missing a number of lines when I view the log from the syslog page (TruncatedLog.png). What would cause this? I made sure all the filtering options were enabled, so that isn't the issue.

3. When configuring the ssmtp plugin, the password for my email account will not save. I hit edit, enter the password, save, and click reinstall with new values, and when the page reloads the password field just shows "your_password" again. Attempting to send a test message fails with an authorization error. Are there certain characters that aren't supported by the plugin, like the "$" discussed in this thread (http://lime-technology.com/forum/index.php?topic=13799)? The password works fine for logging into the account through the web browser. EDIT: upon further investigation, it appears the ssmtp plugin does not play nice with quotation marks. When I hit save in the plugin manager, before installing with the new values, it shows the password up to the point where the " is and truncates the rest. Upon reinstalling, the password value is changed back to "your_password". It saves just fine when I use something simple such as "testpassword". I guess for this one I will just have to change my password.

4. Does unMenu still need the entry in the go script to start with the server?

I have updated and restarted unMenu without success in an attempt to fix these issues. This is the info from my unMenu about section:
  7. After receiving some advice from GaryCase, I decided the smartest course of action when upgrading from 4.7 to 5.0.5 would be to start with a fresh install of unRAID. I backed up the data from the server today, so if I mess something up all is not lost; however, I would much rather avoid a catastrophe altogether if possible. My basic plan of attack so far:

1. Back up the current flash drive, note disk positions, and make note of all pertinent plugin configuration values.
2. Wipe the flash drive, copy v5.0.5 onto it, and run make_bootable.
3. Edit the ident.cfg and network.cfg files to the appropriate values so I can access the server via the web management utility (illustrative sketch below).
4. Eject the flash drive, plug it back into the server, and boot.
5. Assign my drives. From what Gary said, as long as my cache and parity drives are assigned correctly, data drive order doesn't matter.

Two questions:
- When it comes to user shares, will unRAID recognize the shares I already have on my disks and populate the user share page accordingly? Or will I need to define them before assigning the disks back to the array?
- Is there anything else important I should know or consider when starting out with a fresh flash drive and a live array like this?

I'm hoping to make a tutorial on how to do this once it is all said and done so other users can benefit from what I've learned.
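For step 3, my understanding is that both files are plain shell-style key/value files on the flash drive. Roughly along these lines (all values here are made up for illustration, not my actual network):

# /boot/config/ident.cfg (excerpt)
NAME="Tower"

# /boot/config/network.cfg (excerpt, static addressing)
USE_DHCP="no"
IPADDR="192.168.0.100"
NETMASK="255.255.255.0"
GATEWAY="192.168.0.1"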
  8. Thanks. I believe I discovered what caused my crash: the swap file plugin for unMenu. The reason it created such a mess is that I had it set to create the swap file on my cache drive, but when I moved the server the other week the cable for the cache drive must have wiggled loose enough to lose its connection (the card it's plugged into has really loose connectors). I'm surprised unRAID didn't make a big deal about a missing drive, even though it's not part of the array. That seems like something you would want to be informed of.
  9. Yeah, that would probably prove the least problematic. I'm assuming the data drives have to be in the exact same order as well?
  10. Thanks for the advice. I think for now I would rather just shut it down and unplug the two spares to avoid any further issues until I have time to upgrade to v5.0.5. I have been slacking on my backups since I lent my friend my external dock, so I'd hate for anything to happen without a recent one. Is http://lime-technology.com/wiki/index.php/Migrating_from_unRAID_4.7_to_unRAID_5.0 still the proper procedure for upgrading to 5.0.5?
  11. I am using unRAID 4.7, so I was unsure if this was the proper place; I don't see a 4.7 support area. My system log is attached.

First, some background info: I have two drives hooked up in my server that are not part of the array. I figure if I have a disk go down, or need to add space on the fly, I can just stop the array, add one of the new disks, and be on my way, as opposed to having to unplug everything and take it to my workstation. Well, these drives never spin down (see this thread: http://lime-technology.com/forum/index.php?topic=6379.0; in short, they are being spun up each time unMenu checks their status), so when I start up the server I telnet in and run hdparm -Y /dev/sdb and hdparm -Y /dev/sdc to spin them down (script sketch at the end of this post), and as long as I'm not in the unMenu interface long enough to let it query the drive status, they stay spun down and everything is kosher.

However, today after a restart I went to do my usual spin-down routine and noticed the server was returning my hdparm requests slower than usual. So I went to unMenu and clicked Disk Management, and that is when all hell broke loose (see attachment for the full syslog). unMenu became unresponsive, but I was still able to access telnet, and I attempted to issue a clean power-down. The monitor showed that it was attempting to power down, but it hung, and I eventually ended up turning it off by holding down the power button.

The first two lines below show me jumping in on telnet to see what exactly was going on; the rest of the sample is just errors.

Apr 5 12:58:33 VOID in.telnetd[17821]: connect from 192.168.0.253 (192.168.0.253)
Apr 5 12:58:41 VOID login[17822]: ROOT LOGIN on `pts/0' from `192.168.0.253'
Apr 5 12:59:25 VOID kernel: sas: command 0xf62d4b40, task 0xc3d1e000, timed out: BLK_EH_NOT_HANDLED
Apr 5 12:59:25 VOID kernel: sas: Enter sas_scsi_recover_host
Apr 5 12:59:25 VOID kernel: sas: trying to find task 0xc3d1e000
Apr 5 12:59:25 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e000
Apr 5 12:59:25 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr 5 12:59:25 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e000
Apr 5 12:59:25 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr 5 12:59:25 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e000 failed to abort
Apr 5 12:59:25 VOID kernel: sas: task 0xc3d1e000 is not at LU: I_T recover
Apr 5 12:59:25 VOID kernel: sas: I_T nexus reset for dev 0500000000000000
Apr 5 12:59:25 VOID kernel: sas: I_T 0500000000000000 recovered
Apr 5 12:59:25 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr 5 12:59:35 VOID kernel: sas: command 0xc3ea1900, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr 5 12:59:35 VOID kernel: sas: Enter sas_scsi_recover_host
Apr 5 12:59:35 VOID kernel: sas: trying to find task 0xc3d1e140
Apr 5 12:59:35 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr 5 12:59:35 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr 5 12:59:35 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr 5 12:59:35 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr 5 12:59:35 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr 5 12:59:35 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr 5 12:59:35 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr 5 12:59:35 VOID kernel: sas: I_T 0400000000000000 recovered
Apr 5 12:59:35 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr 5 13:00:10 VOID kernel: sas: command 0xf6253180, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr 5 13:00:10 VOID kernel: sas: Enter sas_scsi_recover_host
Apr 5 13:00:10 VOID kernel: sas: trying to find task 0xc3d1e140
Apr 5 13:00:10 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr 5 13:00:10 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr 5 13:00:10 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr 5 13:00:10 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr 5 13:00:10 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr 5 13:00:10 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr 5 13:00:10 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr 5 13:00:10 VOID kernel: sas: I_T 0400000000000000 recovered
Apr 5 13:00:10 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr 5 13:00:18 VOID kernel: sas: command 0xc4467c00, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr 5 13:00:18 VOID kernel: sas: Enter sas_scsi_recover_host
Apr 5 13:00:18 VOID kernel: sas: trying to find task 0xc3d1e140
Apr 5 13:00:18 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr 5 13:00:18 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr 5 13:00:18 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr 5 13:00:18 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr 5 13:00:18 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr 5 13:00:18 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr 5 13:00:18 VOID kernel: sas: I_T nexus reset for dev 0500000000000000
Apr 5 13:00:18 VOID kernel: sas: I_T 0500000000000000 recovered
Apr 5 13:00:18 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr 5 13:00:26 VOID kernel: sas: command 0xf6253600, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr 5 13:00:26 VOID kernel: sas: Enter sas_scsi_recover_host
Apr 5 13:00:26 VOID kernel: sas: trying to find task 0xc3d1e140
Apr 5 13:00:26 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr 5 13:00:26 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr 5 13:00:26 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr 5 13:00:26 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr 5 13:00:26 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr 5 13:00:26 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr 5 13:00:26 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr 5 13:00:26 VOID kernel: sas: I_T 0400000000000000 recovered
Apr 5 13:00:26 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr 5 13:04:04 VOID unraid-swapfile[18801]: Initiating unRAID swap-file.
Apr 5 13:04:18 VOID kernel: NTFS driver 2.1.29 [Flags: R/O MODULE].
Apr 5 13:04:25 VOID kernel: sas: command 0xe6873cc0, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr 5 13:04:25 VOID kernel: sas: Enter sas_scsi_recover_host
Apr 5 13:04:25 VOID kernel: sas: trying to find task 0xc3d1e140
Apr 5 13:04:25 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr 5 13:04:25 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr 5 13:04:25 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr 5 13:04:25 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr 5 13:04:25 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr 5 13:04:25 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr 5 13:04:25 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr 5 13:04:25 VOID kernel: sas: I_T 0400000000000000 recovered
Apr 5 13:04:25 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr 5 13:04:56 VOID kernel: sas: command 0xc3e30780, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr 5 13:04:56 VOID kernel: sas: Enter sas_scsi_recover_host
Apr 5 13:04:56 VOID kernel: sas: trying to find task 0xc3d1e140
Apr 5 13:04:56 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr 5 13:04:56 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr 5 13:04:56 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr 5 13:04:56 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr 5 13:04:56 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr 5 13:04:56 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr 5 13:04:56 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr 5 13:04:56 VOID kernel: sas: I_T 0400000000000000 recovered
Apr 5 13:04:56 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr 5 13:05:02 VOID kernel: ------------[ cut here ]------------
Apr 5 13:05:02 VOID kernel: WARNING: at drivers/ata/libata-core.c:5186 ata_qc_issue+0x10b/0x308()
Apr 5 13:05:02 VOID kernel: Hardware name: To Be Filled By O.E.M.
Apr 5 13:05:02 VOID kernel: Modules linked in: ntfs md_mod xor ide_gd_mod atiixp ahci r8169 mvsas libsas scst scsi_transport_sas
Apr 5 13:05:02 VOID kernel: Pid: 19105, comm: hdparm Not tainted 2.6.32.9-unRAID #8
Apr 5 13:05:02 VOID kernel: Call Trace:
Apr 5 13:05:02 VOID kernel: [<c102449e>] warn_slowpath_common+0x60/0x77
Apr 5 13:05:02 VOID kernel: [<c10244c2>] warn_slowpath_null+0xd/0x10
Apr 5 13:05:02 VOID kernel: [<c11b624d>] ata_qc_issue+0x10b/0x308
Apr 5 13:05:02 VOID kernel: [<c11ba260>] ata_scsi_translate+0xd1/0xff
Apr 5 13:05:02 VOID kernel: [<c11a816c>] ? scsi_done+0x0/0xd
Apr 5 13:05:02 VOID kernel: [<c11a816c>] ? scsi_done+0x0/0xd
Apr 5 13:05:02 VOID kernel: [<c11baa40>] ata_sas_queuecmd+0x120/0x1d7
Apr 5 13:05:02 VOID kernel: [<c11bc6df>] ? ata_scsi_pass_thru+0x0/0x21d
Apr 5 13:05:02 VOID kernel: [<f843969a>] sas_queuecommand+0x65/0x20d [libsas]
Apr 5 13:05:02 VOID kernel: [<c11a816c>] ? scsi_done+0x0/0xd
Apr 5 13:05:02 VOID kernel: [<c11a82c0>] scsi_dispatch_cmd+0x147/0x181
Apr 5 13:05:02 VOID kernel: [<c11ace4d>] scsi_request_fn+0x351/0x376
Apr 5 13:05:02 VOID kernel: [<c1126798>] __blk_run_queue+0x78/0x10c
Apr 5 13:05:02 VOID kernel: [<c1124446>] elv_insert+0x67/0x153
Apr 5 13:05:02 VOID kernel: [<c11245b8>] __elv_add_request+0x86/0x8b
Apr 5 13:05:02 VOID kernel: [<c1129343>] blk_execute_rq_nowait+0x4f/0x73
Apr 5 13:05:02 VOID kernel: [<c11293dc>] blk_execute_rq+0x75/0x91
Apr 5 13:05:02 VOID kernel: [<c11292cc>] ? blk_end_sync_rq+0x0/0x28
Apr 5 13:05:02 VOID kernel: [<c112636f>] ? get_request+0x204/0x28d
Apr 5 13:05:02 VOID kernel: [<c11269d6>] ? get_request_wait+0x2b/0xd9
Apr 5 13:05:02 VOID kernel: [<c112c2bf>] sg_io+0x22d/0x30a
Apr 5 13:05:02 VOID kernel: [<c112c5a8>] scsi_cmd_ioctl+0x20c/0x3bc
Apr 5 13:05:02 VOID kernel: [<c11b3257>] sd_ioctl+0x6a/0x8c
Apr 5 13:05:02 VOID kernel: [<c112a420>] __blkdev_driver_ioctl+0x50/0x62
Apr 5 13:05:02 VOID kernel: [<c112ad1c>] blkdev_ioctl+0x8b0/0x8dc
Apr 5 13:05:02 VOID kernel: [<c1131e2d>] ? kobject_get+0x12/0x17
Apr 5 13:05:02 VOID kernel: [<c112b0f8>] ? get_disk+0x4a/0x61
Apr 5 13:05:02 VOID kernel: [<c101b028>] ? kmap_atomic+0x14/0x16
Apr 5 13:05:02 VOID kernel: [<c11334a5>] ? radix_tree_lookup_slot+0xd/0xf
Apr 5 13:05:02 VOID kernel: [<c104a179>] ? filemap_fault+0xb8/0x305
Apr 5 13:05:02 VOID kernel: [<c1048c43>] ? unlock_page+0x18/0x1b
Apr 5 13:05:02 VOID kernel: [<c1057c63>] ? __do_fault+0x3a7/0x3da
Apr 5 13:05:02 VOID kernel: [<c105985f>] ? handle_mm_fault+0x42d/0x8f1
Apr 5 13:05:02 VOID kernel: [<c108b6c6>] block_ioctl+0x2a/0x32
Apr 5 13:05:02 VOID kernel: [<c108b69c>] ? block_ioctl+0x0/0x32
Apr 5 13:05:02 VOID kernel: [<c10769d5>] vfs_ioctl+0x22/0x67
Apr 5 13:05:02 VOID kernel: [<c1076f33>] do_vfs_ioctl+0x478/0x4ac
Apr 5 13:05:02 VOID kernel: [<c105dcdd>] ? do_mmap_pgoff+0x232/0x294
Apr 5 13:05:02 VOID kernel: [<c1076f93>] sys_ioctl+0x2c/0x45
Apr 5 13:05:02 VOID kernel: [<c1002935>] syscall_call+0x7/0xb
Apr 5 13:05:02 VOID kernel: ---[ end trace 67ab7e794839da63 ]---
Apr 5 13:05:09 VOID kernel: sas: command 0xc44b3300, task 0xc3d1e000, timed out: BLK_EH_NOT_HANDLED
Apr 5 13:05:27 VOID kernel: sas: command 0xc3e30780, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr 5 13:05:27 VOID kernel: sas: Enter sas_scsi_recover_host
Apr 5 13:05:27 VOID kernel: sas: trying to find task 0xc3d1e000
Apr 5 13:05:27 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e000
Apr 5 13:05:27 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr 5 13:05:27 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e000
Apr 5 13:05:27 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr 5 13:05:27 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e000 failed to abort
Apr 5 13:05:27 VOID kernel: sas: task 0xc3d1e000 is not at LU: I_T recover
Apr 5 13:05:27 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr 5 13:05:27 VOID kernel: sas: I_T 0400000000000000 recovered
Apr 5 13:05:27 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr 5 13:05:35 VOID kernel: sas: command 0xc4465c00, task 0xc3d1e280, timed out: BLK_EH_NOT_HANDLED
Apr 5 13:05:58 VOID kernel: sas: command 0xc3e30780, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr 5 13:05:58 VOID kernel: sas: Enter sas_scsi_recover_host
Apr 5 13:05:58 VOID kernel: sas: trying to find task 0xc3d1e280
Apr 5 13:05:58 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e280
Apr 5 13:05:58 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr 5 13:05:58 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e280
Apr 5 13:05:58 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr 5 13:05:58 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e280 failed to abort
Apr 5 13:05:58 VOID kernel: sas: task 0xc3d1e280 is not at LU: I_T recover
Apr 5 13:05:58 VOID kernel: sas: I_T nexus reset for dev 0500000000000000
Apr 5 13:05:58 VOID kernel: sas: I_T 0500000000000000 recovered
Apr 5 13:05:58 VOID kernel: sas: trying to find task 0xc3d1e140
Apr 5 13:05:58 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr 5 13:05:58 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr 5 13:05:58 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr 5 13:05:58 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr 5 13:05:58 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr 5 13:05:58 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr 5 13:05:58 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr 5 13:05:58 VOID kernel: sas: I_T 0400000000000000 recovered
Apr 5 13:05:58 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr 5 13:06:06 VOID kernel: sas: command 0xf62e5a80, task 0xc3d1e000, timed out: BLK_EH_NOT_HANDLED
Apr 5 13:06:29 VOID kernel: sas: command 0xc3e30780, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr 5 13:06:29 VOID kernel: sas: Enter sas_scsi_recover_host
Apr 5 13:06:29 VOID kernel: sas: trying to find task 0xc3d1e000
Apr 5 13:06:29 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e000
Apr 5 13:06:29 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr 5 13:06:29 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e000
Apr 5 13:06:29 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr 5 13:06:29 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e000 failed to abort
Apr 5 13:06:29 VOID kernel: sas: task 0xc3d1e000 is not at LU: I_T recover
Apr 5 13:06:29 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr 5 13:06:29 VOID kernel: sas: I_T 0400000000000000 recovered
Apr 5 13:06:29 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr 5 13:07:00 VOID kernel: sas: command 0xc3e30780, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr 5 13:07:00 VOID kernel: sas: Enter sas_scsi_recover_host
Apr 5 13:07:00 VOID kernel: sas: trying to find task 0xc3d1e140
Apr 5 13:07:00 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr 5 13:07:00 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr 5 13:07:00 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr 5 13:07:00 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr 5 13:07:00 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr 5 13:07:00 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr 5 13:07:00 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr 5 13:07:00 VOID kernel: sas: I_T 0400000000000000 recovered
Apr 5 13:07:00 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr 5 13:07:31 VOID kernel: sas: command 0xc3e30780, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr 5 13:07:31 VOID kernel: sas: Enter sas_scsi_recover_host

By following the same process as the first time, I have been able to reproduce this error twice now. I haven't installed or changed anything recently on the server except for enabling the unRAID swap file addon in package manager today (though I am unsure if that was before or after I started having these problems). I am running a parity check now and will test when it is finished whether the problem still persists without the swap file addon enabled.
I am aware it is recommended to upgrade to the latest stable release and see if the problem persists; I just have not had the time to research the process, and I wouldn't want to upgrade now until I get whatever this issue is sorted. So, bottom line, what I am asking is: are these errors something I should be concerned about as far as the integrity of my array goes? Or is it a bug of some kind that may very well be fixed if I do upgrade to the new version of unRAID?

EDIT: I forgot to mention that the stock web interface continues to work fine even when I can't connect to unMenu. Not sure if that matters; just trying to provide all the potentially relevant info I can think of. syslog.zip
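Side note for anyone copying the spin-down routine described above: the two hdparm calls can live in a tiny script on the flash drive so it's a single command after each boot. A sketch, assuming the spares still enumerate as sdb and sdc (device letters can shift between boots, so double-check before running it):

#!/bin/bash
# /boot/spindown_spares.sh (hypothetical helper): sleep the two non-array spares
for dev in /dev/sdb /dev/sdc; do
  hdparm -Y "$dev"   # -Y puts the drive to sleep; use -y for standby instead
done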
  12. Are you sure you are running snap.sh from the /boot/config/snap folder?
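i.e. something like this (any arguments snap.sh takes depend on your setup, so I've left them out):

cd /boot/config/snap
./snap.sh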
  13. Never mind, everything seems to work now.
  14. Yeah, I figured as much. I guess I'll RMA it. So were the bad sectors on the drive what was causing those errors in my second log, or do they indicate another issue?
  15. The preclear is done and the log is attached. preclear_results.txt
  16. The post-read on the drive is at 83%. I will post the results of a long SMART test as soon as it is finished.
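For reference, this is roughly how I kicked off the long test (substitute your own device name; on some controllers of this era smartctl also needs -d ata to talk to the drive):

smartctl -d ata -t long /dev/sdb      # start the extended self-test (runs inside the drive)
smartctl -d ata -l selftest /dev/sdb  # check progress/results once it finishes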
  17. Running unRAID 4.7. My specs are the same as Eskro's 20-drive array he posted in the forum, with the exception of a higher-wattage power supply. I'm seeing some weird errors in the syslog. Is this a bad drive? Or possibly a bad power connector or SATA cable? Maybe a bad connector on the add-on card?

Here is how it went down. The first time: I originally had the drive plugged into P2 of my 4-port SAS cable. I tried to preclear the drive; it produced 4GB of errors in the syslog in about 15 minutes and locked up the server, but I believe this was due to a bad power connector or SATA cable, as I plugged the drive into my Windows computer and ran check disk, and it seemed to finish okay. I have a drive plugged into P1 on the same SAS cable that works fine, so I believe the connector on the add-on card is good; it might just be that P2 is the problem. Here are the logs from the first attempt: http://rghost.net/private/45200731/9eeca3bb225bdac0509ee0287a3a39bd (NOTE: unzipped, these files are a whopping 4GB, so make sure you have enough space. You will also not be able to open them with Notepad; it will throw an error. Google LTFViewer 5.2 for a good program to use.)

The second time around: I swapped out the SATA cable (ditched the SAS altogether and used my last free port on my 2x SATA add-on card) and the power adapter, then ran a short SMART test (which passed). The drive is nearly done preclearing now, but I'm still getting intermittent errors that are different from the first time around. See the log file attached to this post. Also of note: for a while, on step 1 of 10 of the preclear, the MyMain SMART view in unMenu was reporting 33 bad sectors, though now that has disappeared for some reason.

Thoughts? Should I just RMA the drive to be on the safe side? Any suggestions would be helpful. -Chris syslog-2013-04-10.txt
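On the bad sectors that appeared and then disappeared: the raw SMART counters are more trustworthy than any GUI view. Something like this (again, substitute your device; -d ata may or may not be needed):

smartctl -a -d ata /dev/sdb | egrep -i 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'

Pending sectors can vanish from views once they are reallocated or successfully rewritten during the preclear; a rising Reallocated_Sector_Ct is the classic RMA signal.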
  18. unRAID 4.7. Any idea what these errors mean? The full log is attached. syslog-2012-12-02.txt
  19. Ah, that makes sense now that I think about it, haha. Thanks!
  20. EDIT2: Actually, it's not solved. So here's my case. I have 4 disks (shown here). My mover script is currently running but only seems to move between disks 1 and 3 (as you can see, disk 4 still has a tremendous amount of space), so I don't understand why it isn't copying to disk 4. It accessed disk 4 for maybe 30 minutes, wrote a small amount of data, and then switched back to disk 1. All my shares are set to high-water with a minimum of 40GB, and I left the disk include/exclude blank, so I'm not really sure why it's not writing much data to disk 4. In fact, just as I was writing this post, I saw unRAID spin up disk 4, watched the used % go up to 26%, and then it spun up disk 1 and started writing to it again.

I'm running unRAID 4.7; my system specs are the same as the ones recommended in Eskro's 16-drive build. I would post my system log, but it is literally just 4,000 lines of stuff like this:

Nov 29 12:20:01 Void logger: >f+++++++++ TV Shows/Blue Mountain State/Season 2/Blue Mountain State.S02E12.Trap Game.avi
Nov 29 12:20:16 Void logger: ./TV Shows/Blue Mountain State/Season 2/Blue Mountain State.S02E10.Hockey.avi
Nov 29 12:20:16 Void logger: .d..t...... TV Shows/Blue Mountain State/Season 2/
Nov 29 12:20:16 Void logger: >f+++++++++ TV Shows/Blue Mountain State/Season 2/Blue Mountain State.S02E10.Hockey.avi
Nov 29 12:20:29 Void logger: ./TV Shows/Blue Mountain State/Season 2
Nov 29 12:20:29 Void logger: .d..t...... TV Shows/Blue Mountain State/Season 2/

If that is somehow useful to you, I can post it.
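For anyone else hitting this, my understanding of high-water allocation (hedged; this is how I read the docs, not gospel): the water mark starts at half the size of the largest data disk and halves each round. A worked example with 2TB disks: unRAID writes to a disk until it is down to 1TB free, then moves to the next; once every disk is at the 1TB mark, the mark drops to 500GB and the cycle repeats. So a pattern like mine, with heavy writes to disks 1 and 3 while disk 4 is barely touched, can still be "correct" behavior if disks 1 and 3 simply haven't been filled down to the current water mark yet.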