
daniel.boone

Everything posted by daniel.boone

  1. I had disk12 (1TB) fail, so I bought a new 3TB disk. I backed up the entire contents of my USB thumb drive and precleared the new disk successfully. Last night I removed the old disk, rebooted and confirmed the disk was missing, stopped the array, added the new disk, checked off the disk rebuild, started the process, and walked away until this morning. Looking at it now, I see the disk has a green ball but says unformatted. The new disk is in the array and the array is started, but my disk 12 contents are not there. Version 6 Beta 6. Here is the portion of the syslog that I think is relevant:

     Nov 30 19:04:05 Tower kernel: mdcmd (69): check CORRECT
     Nov 30 19:04:05 Tower kernel: md: recovery thread woken up ...
     Nov 30 19:04:05 Tower kernel: md: recovery thread rebuilding disk12 ...
     Nov 30 19:04:05 Tower kernel: md: using 1536k window, over a total of 2930266532 blocks.
     Nov 30 19:04:06 Tower emhttp: shcmd (254): :>/etc/samba/smb-shares.conf
     Nov 30 19:04:06 Tower avahi-daemon[3587]: Files changed, reloading.
     Nov 30 19:04:06 Tower emhttp: Restart SMB...
     Nov 30 19:04:06 Tower emhttp: shcmd (255): killall -HUP smbd
     Nov 30 19:04:06 Tower emhttp: shcmd (256): cp /etc/avahi/services/smb.service- /etc/avahi/services/smb.service
     Nov 30 19:04:06 Tower avahi-daemon[3587]: Files changed, reloading.
     Nov 30 19:04:06 Tower avahi-daemon[3587]: Service group file /etc/avahi/services/smb.service changed, reloading.
     Nov 30 19:04:06 Tower emhttp: shcmd (257): ps axc | grep -q rpc.mountd
     Nov 30 19:04:06 Tower emhttp: _shcmd: shcmd (257): exit status: 1
     Nov 30 19:04:06 Tower emhttp: shcmd (258): /usr/local/sbin/emhttp_event svcs_restarted
     Nov 30 19:04:06 Tower emhttp_event: svcs_restarted
     Nov 30 19:04:06 Tower emhttp: shcmd (259): /usr/local/sbin/emhttp_event started
     Nov 30 19:04:06 Tower emhttp_event: started
     Nov 30 19:04:07 Tower avahi-daemon[3587]: Service "Tower" (/etc/avahi/services/smb.service) successfully established.
     Nov 30 19:04:19 Tower kernel: docker0: port 2(vethe19d) entered forwarding state
     Nov 30 19:04:20 Tower kernel: docker0: port 3(veth4a5e) entered forwarding state
     Nov 30 19:04:20 Tower kernel: docker0: port 4(vethb767) entered forwarding state
     Nov 30 19:04:20 Tower kernel: docker0: port 5(vethe50b) entered forwarding state
     Nov 30 19:06:17 Tower avahi-daemon[3587]: Withdrawing workstation service for veth8101.
     Nov 30 19:06:17 Tower kernel: docker0: port 1(veth8101) entered disabled state
     Nov 30 19:06:17 Tower kernel: device veth8101 left promiscuous mode
     Nov 30 19:06:17 Tower kernel: docker0: port 1(veth8101) entered disabled state
     Dec 1 01:08:20 Tower kernel: mdcmd (70): spindown 3
     Dec 1 01:08:20 Tower kernel: mdcmd (71): spindown 4
     Dec 1 01:08:21 Tower kernel: mdcmd (72): spindown 5
     Dec 1 01:08:22 Tower kernel: mdcmd (73): spindown 6
     Dec 1 01:08:23 Tower kernel: mdcmd (74): spindown 7
     Dec 1 01:08:23 Tower kernel: mdcmd (75): spindown 8
     Dec 1 01:08:24 Tower kernel: mdcmd (76): spindown 13
     Dec 1 01:08:25 Tower kernel: mdcmd (77): spindown 15
     Dec 1 04:52:58 Tower kernel: mdcmd (78): spindown 1
     Dec 1 04:52:58 Tower kernel: mdcmd (79): spindown 2
     Dec 1 04:52:59 Tower kernel: mdcmd (80): spindown 9
     Dec 1 04:53:00 Tower kernel: mdcmd (81): spindown 10
     Dec 1 04:53:00 Tower kernel: mdcmd (82): spindown 11
     Dec 1 04:53:01 Tower kernel: mdcmd (83): spindown 14
     Dec 1 06:14:15 Tower kernel: md: sync done. time=40210sec
     Dec 1 06:14:16 Tower kernel: md: recovery thread sync completion status: 0

     Hoping I can still save the original disk 12 contents. Recommendations? Thanks,
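     In case it helps anyone searching later: with the array started in Maintenance mode, a read-only filesystem check can be run against the rebuilt disk's md device before doing anything else. A minimal sketch, assuming disk12 maps to /dev/md12 and the filesystem is ReiserFS (the default on this release):

        # Read-only check first; do not write anything until you've seen the report
        reiserfsck --check /dev/md12

        # Only if the check explicitly recommends it:
        # reiserfsck --fix-fixable /dev/md12

     If the check comes back clean or mostly clean, the "unformatted" status is likely a superblock problem rather than lost data, so nothing destructive (especially not a format) should be done until that's ruled out.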
  2. I'm moving and hope to minimize any issues with my unRAID server. I have a 4U case that is mostly full at this point. I was thinking of getting a bulk hard drive shipping container for the drives and packing the server in its original shipping box. The server will sit in storage for a month or two. Leaving the drives in the server makes it way too heavy. Any advice? Anything I should not do? TIA
  3. I've been noticing some slowness but attributed it to all the plugins and the age of the cache drive. Recently found this thread, so I gave sysctl vm.highmem_is_dirtyable a try. Even without formal testing I can see MySQL is functioning much better. Newznab was painfully slow while populating the database; I was considering abandoning it entirely. It's definitely better now. Looking forward to an x64 version. I'm sure the db plugin would benefit greatly.
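     For anyone who wants to try the same tweak, a minimal sketch, assuming you want it re-applied at every boot via the standard go script (the value of 1 is what worked here; confirm it suits your setup):

        # Apply immediately (lasts until the next reboot)
        sysctl -w vm.highmem_is_dirtyable=1

        # Persist across reboots via the unRAID go script
        echo "sysctl -w vm.highmem_is_dirtyable=1" >> /boot/config/go

     The knob only exists on 32-bit kernels with highmem, which is why an x64 build should make it moot.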
  4. 2X Kingston KVR1333D3E9SK2/8G DDR3-1333 8GB (2x4GB) ECC Unbuffered. Four 4GB modules (16GB total) from my Supermicro X9SCM-F-O MB. $60 (was $70) shipped CONUS. I would sell in 8GB parts for $35 (was $40) shipped CONUS. RAM works great; I just recently upgraded. Price reduced; would consider trades. Thanks
  5. If the plugin works: install TortoiseSVN on your Windows machine, create a newznab folder, and download from svn using the supplied creds. On unRAID: stop the web server, go to the newznab installation point, back up the files, delete the contents of the newznab folder, and copy the files from the Windows newznab folder to the unRAID newznab folder. Restart the web server. To simplify this, I've created a shortcut to the unRAID newznab folder on my Windows machine and update newznab from the shortcut. From what I read that plugin is fairly new, so think WIP. If the web server is not working you may be missing the cnf file. Here is a link to the file provided by influencer; this fixed my issue. http://lime-technology.com/forum/index.php?topic=24676.msg214363#msg214363 cheers db
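     If svn is available on the unRAID box itself, the round trip through Windows can be skipped entirely. A rough sketch, assuming a hypothetical install path of /mnt/cache/newznab that was originally checked out with svn (adjust the path and the web server init script to your plugin's layout):

        # Stop the plugin's web server first (script name varies by plugin)
        /etc/rc.d/rc.apache stop

        # Back up the current install, then pull the latest source in place
        cp -a /mnt/cache/newznab /mnt/cache/newznab.bak
        cd /mnt/cache/newznab && svn update

        /etc/rc.d/rc.apache start

     Same steps as the Windows method, just without copying files back and forth.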
  6. I may have traced the root cause to a "working" USB stick. Once I swapped it with my backup, things stabilized quickly. I went ahead and swapped the 3TB Seagate parity with a 3TB Hitachi anyway; I wasn't comfortable leaving it in place after so much trouble. I ran the Seagate through preclear after removing the HPA. It tested good, so I added it as a blank to the array just to see if it holds up. It's been about a week with no issues. I've only added one disk so far. If my luck continues after a full month I will add some real content to the disk. I've also updated the FW on the M1015, which fixed me not seeing the hard drives connected to that HBA.
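     For reference, the M1015 firmware update is typically done with LSI's sas2flsh utility from a DOS or EFI boot stick; a rough outline, assuming the IT-mode image 2118it.bin is the release you want (filenames vary by firmware package):

        # List controllers to confirm the card is visible
        sas2flsh -listall

        # Flash the new firmware onto the adapter
        sas2flsh -o -f 2118it.bin

     Note this is just the update step; the initial crossflash away from the stock IBM firmware is a longer procedure covered elsewhere on the forum.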
  7. Also wanted to point out the failure seems to happen most frequently after a successful parity check, once the drives have spun down: "Parity is Valid. Last parity check < 1 day ago with no sync errors." Edit: SMART report added. SmartReport.txt
  8. My sig was pretty accurate until I added the IBM card and the latest RC. I do have the SuperMicro MV8 as well as an Adaptec and a re-flashed IBM card attached to that MB. I have 8 ports from my MV8 HBA, 4 from the Adaptec, and the rest from my MB, making for a total of 16 drives in this array counting the cache and parity. At this time the IBM controller has no array drives attached to it; I just plugged it in in the hope the beta worked and I could add more drives. This card brings the tally to 24 ports on my system. It's unlikely the IBM controller is related to this issue: the Seagate drive redballs with or without the IBM card. This exact system works without fault when I load 4.7. Whenever I load any beta, including the latest RC, a redball happens on this one drive every time. I do appreciate the info on the FW; I'll make sure to update once I have the drive issue sorted. Thanks DB
  9. Same issue I've had since the later betas. I load 4.7 and the drive works fine; in fact it has worked fine for some time now. At this point I'm inclined to swap out the drive completely since I don't want to go backwards. The drive in question is a 3TB Seagate to which I added an HPA so it would function with the mainstream release. I spotted this in the log file right after spin-up:

     Oct 12 09:02:44 Tower kernel: mdcmd (119): spinup 0
     Oct 12 09:02:44 Tower kernel: mdcmd (120): spinup 1
     Oct 12 09:02:44 Tower kernel: mdcmd (121): spinup 2
     Oct 12 09:02:44 Tower kernel: mdcmd (122): spinup 3
     Oct 12 09:02:44 Tower kernel: mdcmd (123): spinup 4
     Oct 12 09:02:44 Tower kernel: mdcmd (124): spinup 5
     Oct 12 09:02:44 Tower kernel: mdcmd (125): spinup 6
     Oct 12 09:02:44 Tower kernel: mdcmd (126): spinup 7
     Oct 12 09:02:44 Tower kernel: mdcmd (127): spinup 8
     Oct 12 09:02:44 Tower kernel: mdcmd (128): spinup 9
     Oct 12 09:02:44 Tower kernel: mdcmd (129): spinup 10
     Oct 12 09:02:44 Tower kernel: mdcmd (130): spinup 11
     Oct 12 09:02:44 Tower kernel: mdcmd (131): spinup 12
     Oct 12 09:02:44 Tower kernel: mdcmd (132): spinup 13
     Oct 12 09:02:44 Tower kernel: mdcmd (133): spinup 14
     Oct 12 09:02:56 Tower kernel: drivers/scsi/mvsas/mv_sas.c 1952:Release slot [0] tag[0], task [e6926780]:
     Oct 12 09:02:56 Tower kernel: sas: sas_ata_task_done: SAS error 8a
     Oct 12 09:02:56 Tower kernel: sd 5:0:0:0: [sdk] command ec61f480 timed out
     Oct 12 09:02:57 Tower kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1

     I have 16 drives in a 4U case, a 750 watt power supply, and a SuperMicro MB. Cache is used as an installation point. All plugins are disabled, but I am loading an updated version of Samba. I did run a parity check right after the upgrade and got no errors when complete. I had to zip the log due to size limits. SMART fails right now; I'll test and post results once I can reboot. Cheers DB syslog-2012-10-12.zip
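     For anyone else juggling an HPA on one of these 3TB Seagates, hdparm can show and adjust it; a minimal sketch, with /dev/sdk standing in for the affected drive (substitute your own device, and be careful: the p-prefixed form writes the change permanently):

        # Compare current max sectors to native max; a mismatch means an HPA is set
        hdparm -N /dev/sdk

        # Restore the full native capacity; take the value from the output above
        # (5860533168 is the native max a 3TB drive reports)
        hdparm -N p5860533168 /dev/sdk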
  10. Surely that's not an estimate of per-drive usage? It would give you a number pretty close to the PSU size you would want, though, so if that's what you mean by total system usage then I agree. This post should clarify things: http://lime-technology.com/forum/index.php?topic=12219.0
  11. You're still using that 500 watt power supply? The additions may have been too much. On my 650 I had trouble at the 13th drive, but I'm running a few 7200s.
  12. Rule of thumb is 24W for green drives and 36W for 7200 rpm drives. Those Norco cases use backplanes, so I used only one splitter, and that was for the fans. Better not to have all those splits in the hard drive power path; fewer possible points of failure.
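     As a worked example of that rule (assuming a hypothetical mix of 13 green and 3 7200 rpm drives, roughly a 16-drive build like the ones discussed above):

        13 green drives x 24W = 312W
         3 x 7200 rpm   x 36W = 108W
        drive spin-up total   = 420W

     That is drives alone; add headroom for the motherboard, CPU, and fans before picking a PSU.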
  13. If 2 or more drives fail, only the data on the failed drives is lost. All the other drives are readable when mounted on a Linux system, or even on Windows with Linux file system utilities. Connect the drive however you want; just don't reformat.
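     A minimal sketch of reading one array disk on a stock Linux box, assuming the data partition shows up as /dev/sdb1 and the filesystem is ReiserFS (the unRAID default at this time); mount read-only so nothing on the disk can be altered:

        mkdir -p /mnt/recovery
        mount -t reiserfs -o ro /dev/sdb1 /mnt/recovery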
  14. Try VirtualBox with your current system; you may be surprised. I tried a few times and decided to go the VMware route. I grabbed a 1240. Overkill for sure. I'm using spinners just because I have them; nothing wrong with that. The MV8 will work with a minor hack, which should be easy to find on the forum.
  15. This may help http://lime-technology.com/forum/index.php?topic=19517.0
  16. What happens when it's installed in the x16 slot? If there are two x16 slots, try both. My SASLP only worked in one of the slots; if I added any other HBA, none would work. My GB motherboard (different model) reconfigures the boot order every time I add a drive. Generalz may have nailed your boot issue.
  17. If the BIOS sees the drives, unRAID should as well. I did a quick search but have not found anyone running an H67 board. Is the USB drive new? Maybe try 4.7 on a different thumb drive just to see if the drives show up then.
  18. Does the BIOS see the drives? What power supply are you using? It looks like the drives are not seen by unRAID at all: a connector issue, either SATA or power. Make sure the controller is not disabled in the BIOS.
  19. "Hi, you mean connect all the other drives at the same time to my new 5.0b11 unRAID?" Yes, you only have to shut down and reboot once.
  20. Yes, I would connect all the old drives to avoid multiple reboots.
  21. I've used MoCA before I hard-wired. Setup is going to be device dependent, but you've got the basic gist. In my case I connected a bridge, since the other side was already connected to my network with a MoCA-capable router; I had the extra router just sitting around. Setting up the bridge was easy. While it worked, it wasn't 100%: I would still get drops, though I was running all kinds of stuff over the bridge. If you have to pay for the adapters, I would recommend buying Ethernet cable and running it alongside the coax instead.
  22. A general search of the forum for 3000dm001 indicates it has been used by forum members successfully. Minor issues, but it works, unlike my ST33000651AS.
  23. The wiki has all the add-ons listed with links to the respective posts: http://lime-technology.com/wiki/index.php/UnRAID_Add_Ons I'd recommend unMenu.