whymse

Members

  • Posts: 8
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed

whymse's Achievements

Noob (1/14)

Reputation: 0

  1. Hi opentoe, Can I ask how you've hooked up those 3x NF-F12s, 2x rear fans, and the CPU fan? I'm not seeing a fan-header diagram for the Sabertooth X79; does it have 6x fan headers? I'm trying to connect the same setup to a board with 5x fan headers and am wondering how yours is hooked up and whether you're doing any kind of fan control for noise. Thanks!
  2. Hi All, I don't think this has been addressed directly on the forum, so I'm hoping for a bit of guidance, please. What I want to do is control the speed of the 5x case fans (the BIOS can control the CPU fan). I'm running unRAID 5.0.4 virtualized on ESXi. The 3x 120mm and 2x 80mm case fans currently run through a BitFenix Recon, but it's a bit hackish, with the cables hanging out the back of the case and the Recon not supporting Linux for RPM control.

     First question: given the motherboard provides FAN1-4 and FAN-A, how are people connecting their six fans to the five headers and getting RPM control? Is there a splitter that would let me connect the 2x 80mm fans to a single FAN-X header and get RPM control for both at the same time?

     Second question: I'm not seeing a solution to the ESXi + Supermicro + sensors issues. Would going the IPMI route (compiling ipmitool, or upgrading unRAID to v6 to get plugins with IPMI support) get me fan control across those four fans? (A read-only ipmitool sketch is appended after this post list.)

     Thanks very much in advance. I found a bunch of ESXi + Supermicro + sensors posts, but they don't give concrete directions as to which way people have gone to get fan control + ESXi + Supermicro working. I assume some have given up, some have gone bare metal, and others may have solved their issue but not posted. Thanks!
  3. Sorry to reply to my own post, but here is a new data point while it is running _very_ slowly through a parity check: the system appears to be repeatedly running smartctl against the sdl drive (a timeout-wrapped smartctl sketch is appended after this post list).

       root@Tower:/var/log# ps auwx | grep smart
       root  2606  1.0  0.0  3552  1224  ?      D   22:43  0:00  /usr/sbin/smartctl -n standby -A /dev/sdl
       root  2607  1.0  0.0     0     0  ?      Z   22:43  0:00  [smartctl] <defunct>
       root  2608  1.0  0.0     0     0  ?      Z   22:43  0:00  [smartctl] <defunct>
       root  2609  1.0  0.0     0     0  ?      Z   22:43  0:00  [smartctl] <defunct>
       root  2611  0.0  0.0  2448   588  pts/2  R+  22:43  0:00  grep smart
       root@Tower:/var/log# ps auwx | grep smart
       root  2644  0.0  0.0  3552  1220  ?      D   22:43  0:00  /usr/sbin/smartctl -n standby -A /dev/sdl
       root  2645  0.0  0.0     0     0  ?      Z   22:43  0:00  [smartctl] <defunct>
       root  2646  0.0  0.0     0     0  ?      Z   22:43  0:00  [smartctl] <defunct>
       root  2647  0.0  0.0     0     0  ?      Z   22:43  0:00  [smartctl] <defunct>
       root  2649  0.0  0.0   448     4  pts/2  R+  22:43  0:00  grep smart
       root@Tower:/var/log# ps auwx | grep smart
       root  2709  0.0  0.0  2448   588  pts/2  S+  22:44  0:00  grep smart

     Any help would be greatly appreciated. I've been down for days now and really just want to know how to replace this drive, or whether it needs to be replaced at all. The system shows all drives as green at the moment.
  4. Hi all, I've been running a 16-drive (15 data + parity) array for almost a year without any issues. I've just had my first problem and am hoping for some guidance so I don't lose any data (or at least minimize data loss). I'm running version 5.0.4 on a Supermicro X9SCM-F-O under ESXi, with two M1015s, both flashed to P15. I was watching a movie when streaming froze, and I checked dmesg on unRAID to find:

       sd 2:0:2:0: [sdl] CDB: cdb[0]=0x88: 88 00 00 00 00 01 3c 46 29 c0 00 00 04 00 00 00
       scsi target2:0:2: handle(0x000b), sas_address(0x4433221102000000), phy(2)
       scsi target2:0:2: enclosure_logical_id(0x500605b0022a1530), slot(1)
       sd 2:0:2:0: task abort: SUCCESS scmd(db84c0c0)
       sd 2:0:2:0: attempting task abort! scmd(db84c0c0)
       sd 2:0:2:0: [sdl] CDB: cdb[0]=0x88: 88 00 00 00 00 01 3c 46 29 c0 00 00 04 00 00 00
       scsi target2:0:2: handle(0x000b), sas_address(0x4433221102000000), phy(2)
       scsi target2:0:2: enclosure_logical_id(0x500605b0022a1530), slot(1)
       sd 2:0:2:0: task abort: SUCCESS scmd(db84c0c0)
       sd 2:0:2:0: attempting task abort! scmd(db84c0c0)
       sd 2:0:2:0: [sdl] CDB: cdb[0]=0x88: 88 00 00 00 00 01 3c 46 29 c0 00 00 04 00 00 00
       scsi target2:0:2: handle(0x000b), sas_address(0x4433221102000000), phy(2)
       scsi target2:0:2: enclosure_logical_id(0x500605b0022a1530), slot(1)
       sd 2:0:2:0: task abort: SUCCESS scmd(db84c0c0)
       sd 2:0:2:0: attempting task abort! scmd(db84c0c0)
       sd 2:0:2:0: [sdl] CDB: cdb[0]=0x88: 88 00 00 00 00 01 3c 46 29 c0 00 00 04 00 00 00
       scsi target2:0:2: handle(0x000b), sas_address(0x4433221102000000), phy(2)
       scsi target2:0:2: enclosure_logical_id(0x500605b0022a1530), slot(1)
       sd 2:0:2:0: task abort: SUCCESS scmd(db84c0c0)

     I cleanly shut down and rebooted, and a parity check kicked off. It grinds along happily at ~90MB/sec until it hits ~77%, then those same errors start throwing on the same drive. It then slows down to XXKB/sec, and both the simplefeatures UI on port 80 and the stock UI on port 8080 become _very_ unresponsive. I had run a successful parity check 100 days prior with no issues and have also checked all cabling thoroughly. The drives normally run ~30C; they are now up at 38-40C because it is 35C outside here and they are grinding through parity. I stopped the check, rebooted, and ran a parity check again, with the same result. I will let the parity check continue to plug along at 50KB/sec and hope it eventually speeds up, but the errors are continuing. I have attached a syslog.

     I tried to run smartctl to attach SMART info, but it hung and can't be killed with CTRL+C. Running ps seems to indicate the system had already tried to run SMART unsuccessfully:

       root@Tower:~# ps ax | grep smart
       2193  ?      S   0:00  sh -c smartctl -d ata -A /dev/sdl| grep -i temperature
       2194  ?      D   0:00  smartctl -d ata -A /dev/sdl
       8328  pts/1  D+  0:00  smartctl -a -A /dev/sdl
       8848  ?      D   0:00  /usr/sbin/smartctl -n standby -A /dev/sdl
       8849  ?      Z   0:00  [smartctl] <defunct>
       8850  ?      Z   0:00  [smartctl] <defunct>
       8851  ?      Z   0:00  [smartctl] <defunct>
       9390  pts/2  S+  0:00  grep smart

     I have attached an older SMART output for the sdl drive instead. At this point I assume the drive is done, and I would just like guidance on the best way to proceed. I have an identical brand-new (still in shrink wrap) 3TB WD Green ready to go (a sketch for matching sdl to a drive serial is appended after this post list). A couple of drives have thrown ~100-200 errors during the failed parity runs, so I'm worried I might have multiple drives dying at the same time. Any suggestions on the best way to proceed to minimize data loss would be greatly appreciated. Thanks so much in advance, and happy holidays to all. syslog_and_smart.zip
  5. Thanks for all your work on this. Very much appreciated.
  6. As an update to my post above: the combination of rc10 and the NFS options defaults,soft,nolock,nfsvers=3,lookupcache=none,noac specified in my fstab seems to be stable after 24 hours of adding/moving/renaming/scanning content (an example fstab entry is sketched after this post list). The mounting system is Ubuntu 12.04. Adding 'lookupcache=none,noac' to 'defaults,soft,nolock,nfsvers=3' stabilized the mount immediately; before those options, Plex scans of the mount would start throwing stale handles. I would still love to see the issues in rc11/rc12/rc12a resolved, but I'm happy to be able to use the system for the moment.
  7. To add a data point: I have recently set up an unRAID server with the intention of mounting two user shares (Movies & Television) on an Ubuntu 12.04 system via NFS. I am unable to perform a Plex scan of the two mounted shares on the Ubuntu system without encountering "missing files"/stale NFS file handles. I have rolled back to rc10 and have tried mounting with:
       - no mount options
       - defaults,soft,nolock,nfsvers=3
     Both failed. I am currently testing the following options and will update with success/failure (a manual mount test is sketched after this post list):
       - defaults,soft,nolock,nfsvers=3,lookupcache=none,noac
     I just wanted to post this in case anyone saw davekeel's successful posts and got overly optimistic that a rollback to rc10 would fix every encounter with the NFS issue. I'm really hoping this gets sorted in rc13, and in the very near future. Seriously bummed out at not being able to use my new unRAID install because of this issue.
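
Sketch for the IPMI route raised in post 2: reading fan RPMs through the Supermicro BMC with ipmitool. This is a minimal, read-only sketch; the BMC address, username, and password are placeholders, and whether the board exposes usable fan-speed control through IPMI is board-specific and not something the posts confirm.

  # Over the network, straight to the BMC (lanplus); host, user and password are placeholders.
  ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P changeme sdr type fan

  # Or from the local OS, with the IPMI kernel drivers loaded.
  modprobe ipmi_si ipmi_devintf
  ipmitool sensor | grep -i fan

Actually changing duty cycles usually means setting the BMC fan mode or issuing board-specific raw commands, which differ between Supermicro generations, so the commands above are only a safe starting point for verifying that the BMC can see the fans at all.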
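Sketch for the hung smartctl in posts 3 and 4: wrapping any new smartctl invocation in coreutils timeout so a non-responsive drive doesn't leave yet another shell stuck. This is damage control only; a process already blocked in the D (uninterruptible) state will not die even to SIGKILL until the controller gives up on the command.

  # Give smartctl 30 seconds, then SIGKILL it; /dev/sdl is the drive from the posts.
  timeout -s KILL 30 /usr/sbin/smartctl -n standby -A /dev/sdl
  echo "exit status: $?"   # 124 or 137 here suggests the drive or controller is not answering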
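Sketch for post 4, before swapping in the replacement 3TB drive: confirming which physical disk is sdl by serial number so the correct drive gets pulled. This relies only on the standard udev by-id symlinks; the exact ID strings will differ on the real system.

  # Map the kernel name sdl back to its model/serial symlink (whole disk, not partitions).
  ls -l /dev/disk/by-id/ | grep 'sdl$'

  # If the drive will answer at all, the serial is also in the smartctl identity section.
  timeout -s KILL 30 smartctl -i /dev/sdl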
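Sketch of the fstab entry implied by post 6. The server name (tower), export path, and client mount point are placeholders; the option string is exactly the one reported as stable in that post.

  # /etc/fstab on the Ubuntu 12.04 client (one line per share)
  tower:/mnt/user/Movies  /mnt/movies  nfs  defaults,soft,nolock,nfsvers=3,lookupcache=none,noac  0  0

The lookupcache=none and noac options disable client-side lookup and attribute caching, which is presumably why server-side renames stop showing up as stale handles, at the cost of extra NFS round trips.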
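Sketch of the manual mount testing described in post 7, for trying option sets without editing fstab each time. The hostname and paths are placeholders; "defaults" is dropped because it is an fstab keyword rather than a command-line option.

  sudo mkdir -p /mnt/television
  sudo mount -t nfs -o soft,nolock,nfsvers=3,lookupcache=none,noac tower:/mnt/user/Television /mnt/television
  # ...run a Plex scan against the mount, then unmount before trying the next option set...
  sudo umount /mnt/television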