Auggie


Everything posted by Auggie

  1. I may give that a try, but my original server had its lone SASLP maxed out at 8 drives (plus 6 drives on the motherboard ports) and did not experience any of the performance issues that I'm seeing on my new server. At least it's good to know I will still see performance improvements from any SAS2LP card installed in an x4 slot.
  2. BTW, if I read the mobo specs correctly, it only has two x8 PCIe slots, with the other two being "x4 in x8 slots," so I'm not sure a third SAS2LP card will provide any performance benefit over the older SASLP card in that x4 slot...?
  3. I restarted my Media Server (v5.0.5) and ever since, I can no longer mount my primary User share on any device (at least not on the OS X machines I've tried so far); they all sit in a seemingly never-ending wait cycle. Subsequent restarts have not resolved the issue. When I attempt to view my primary User share contents via the web GUI ("Shares" tab), the web page also appears to be stuck waiting to be drawn. There are no apparent errors in the syslog. I can access the individual drives via the web GUI as well as mount them successfully on client machines (though I haven't tried all 23 data drives yet). The SMART plug-in via unMENU shows no errors. I started a ReiserFS check on the first data drive and it's still at it after an hour and a half (4TB drive). There are 13 4TB, 6 3TB, and 4 2TB data drives, and I'm afraid it will take at least a week to run a ReiserFS check on every one of them, or longer, since I cannot sit in front of the computer 24/7 to immediately interpret results and initiate the next scan. Once the first drive check completes, I will test all other User shares, mount all drives individually, and try mounting on non-OS X devices. What are my other courses of action beyond these?
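On the drive-by-drive ReiserFS checks: a small queue script can run the scans back-to-back unattended instead of each one being started by hand. A minimal sketch only — the device names below are placeholders (on unRAID 5 the per-disk md devices are typically checked with the array in maintenance mode; adjust to your layout), and the `--yes` flag assumes the reiserfsprogs build accepts it to suppress prompts:

```shell
# Sketch of an unattended ReiserFS check queue. Device names are placeholders;
# extend DRIVES to cover all 23 data drives. build_check_cmds only prints the
# commands so the queue can be reviewed before committing to a multi-day run.
DRIVES="md1 md2 md3"
build_check_cmds() {
  for d in $DRIVES; do
    echo "reiserfsck --check --yes /dev/$d"
  done
}
build_check_cmds
# To actually run the queue sequentially with a log you can review later:
#   build_check_cmds | while read -r cmd; do
#     echo "=== $(date) $cmd ===" >>/boot/fscheck.log
#     $cmd >>/boot/fscheck.log 2>&1
#     echo "exit status: $?" >>/boot/fscheck.log
#   done
```

Run once in an IPMI or console session (not a telnet window that might drop), then read /boot/fscheck.log at leisure.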
  4. I got the SASLP-MV8's for the "new" server since I had no problems with my "old" server, they were supported by UnRAID, and I didn't know there were any newer versions of MV8's. Before resorting to getting a new mobo/CPU, I'll probably consider switching to the SAS2LP's; I can replace them one at a time to see if I see any marked performance improvements. Yes, my sig of the "new" server is the target server in question that I'm experiencing issues with. Running the most current 5.0.5.
  5. When I outgrew my maxed-out 14-drive server, I built another server on a used X9SCM-F mobo and a 2.3GHz Sandy Bridge processor. All other components, including RAM, were brand new. The original server I re-tasked as a general-purpose file server with a whole different set of drives. I moved all the drives into the new server and expanded to its maximum capacity of 24 drives. It has always been somewhat sluggish accessing its shares, both user shares and individual disks. I kept thinking perhaps the performance issue was Mac related (especially under Mavericks), but I've never had any performance issues accessing the Atom-based old server from the same Mac, before or after. And when I tried accessing the new server from a non-Mac machine (a PCH C-200, to be specific), it, too, seemed to get hung up for a while at times. So too with the web interface: sometimes there is a long delay before the page is completed or refreshed. There are never any errors generated in any form or fashion. The machine itself never crashes, and I don't have delays accessing it via telnet or working the command line directly on the machine. I've already sent the mobo to SuperMicro for a dead PCI slot issue (covered under warranty), so I assume they would have performed other diagnostics to determine if the motherboard was defective in any other way. I'm at my wit's end and contemplating buying a factory brand-new motherboard and CPU just to check whether the used motherboard and/or CPU are suspect. Or is it just more overhead from running 24 drives? Are there any other troubleshooting methods I can try before taking such a drastic step?
  6. Yes, only with unRAID v6. Under v5 it works perfectly. I just upgraded to v6b3 on my second, smaller unRAID server, and that's when I immediately noticed the error with the SMART History feature...
  7. Reviving this thread because I believe my issue is pertinent to this "support" thread: Will an unRAID v6 version be supported (or is anyone specifically building a 64-bit version)? I have this installed on both a v5 and a beta v6 machine, but it is on v6 where there are problems: the above error is repeated ad nauseam before the statistics of each drive are displayed.
  8. Search for the 64bit plugin. Not clear if all the bugs are worked out. I get a TON of hits for "64 bit plugin," so I'm not sure what exactly I'm supposed to be looking for, as I'm not versed at all in the plugin architecture/offerings of unRAID or unMENU. I did find a thread regarding the APCUPSD plugin being adapted for 64-bit unRAID v6 (http://lime-technology.com/forum/index.php?topic=31496.msg285638#msg285638): is this what you are referring to, specific to my concern with APC battery backup? Or is there a general plugin "package" assembled for 64-bit v6? Under unRAID v4/5, unMENU's package manager provided direct management of the most popular "plugins" (if that's the correct terminology for the "packages" managed by unMENU), downloading user-selected packages, including the APC battery backup management package. That's why I never bothered to understand the package/plug-in architecture of unRAID/unMENU any further: all the functionality I needed was provided through unMENU, and those add-ons are what concern me regarding unRAID v6...
  9. I installed it on my test/dev system running unRAID 6.0b2 and it worked fine. I also recall Joe L. replying that, as a script, it doesn't call any procedures that are strictly 32-bit (well, something to that effect). However, the packages you can install from unMenu will not work. Are any packages usable under unRAID 6/64-bit? Or are we at the mercy of the package devs to update them? I'm concerned with the APC battery backup package...
  10. Yea, but... http://www.memory4less.com/m4l_itemdetail.aspx?itemid=1473738641&partno=HUS726060ALS640 So, no thanks. I'll wait patiently for the more pedestrian, price-friendly 5TB for now.
  11. Sorry, I blame the script. The other script works just fine, thank you very much. An answer like yours is surely flame bait, as we ALL rely on some technology for monitoring and reporting, including unRAID and SMART and all the other doodads. Unless you are a hermit who stands by your terminal all day, watching and listening and touching your hardware 24/7.
  12. Wow, it happened again; the script was suddenly setting the RPMs to roughly half the maximum speeds the fans are capable of. I forgot to change the go script, so after a reboot this fan_speed.sh script was started up and I decided to let it run. Then a couple days later, when I checked in on the server, almost all the drive temps were in the 50 degree Celsius range! (I've got to fix my drive temp warning e-mails.) Absolutely unacceptable, as this could have resulted in permanent hardware failures. Maybe if I had the time I would scrutinize the script line-by-line, but something is not working with my X9SCM-F board. Attached is the script in case someone wants to take a gander at what could be causing the RPM issues... fan_speed.sh.zip
  13. What are the commands you added to the go script to get this running on cron? I don't remember where I found this, but I'm using the following in my go script:

      # Load adapter drivers:
      modprobe ipmi-si
      # Load chip drivers:
      modprobe coretemp
      modprobe w83627ehf
      # Insert unraid-fan-speed.sh into crontab
      chmod +x /boot/scripts/unraid-fan-speed.sh
      crontab -l >/tmp/crontab
      grep -q "unraid-fan-speed.sh" /tmp/crontab 1>/dev/null 2>&1
      if [ "$?" = "1" ]
      then
        crontab -l | egrep -v "control unRAID fan speed based on temperature:|unraid-fan-speed.sh" >/tmp/crontab
        echo "#" >>/tmp/crontab
        echo "# control unRAID fan speed based on temperature" >>/tmp/crontab
        echo "*/2 * * * * /boot/scripts/unraid-fan-speed.sh 1>/dev/null 2>&1" >>/tmp/crontab
        cp /tmp/crontab /var/spool/cron/crontabs/root-
        crontab /tmp/crontab
      fi

      The problem is that unRAID 5 is choking on the line "cp /tmp/crontab /var/spool/cron/crontabs/root-" (specifically, something to do with the root account) and reports this constantly in the syslog. So I'm trying to understand the logic behind this copy command followed by the crontab command, in order to modify it to work under unRAID 5. Even with these errors, though, the script appears to still be running in a timely fashion and correctly adjusting my fans.
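On the copy-then-crontab logic: the `cp /tmp/crontab /var/spool/cron/crontabs/root-` line only keeps a backup of the assembled table in the spool directory (the trailing `root-` is the backup file's name); it is the `crontab /tmp/crontab` call on the next line that actually installs it. So if the spool path is what unRAID 5 objects to, the backup copy can simply be dropped and the whole block reduced to a stdin-driven install. A sketch under that assumption (the helper name is hypothetical; the cron entry is the one from the go script above):

```shell
# add_fan_cron_entry: reads the current crontab text on stdin, appends the
# fan-speed entry only if it is missing, and prints the merged table.
# Intended use:  crontab -l 2>/dev/null | add_fan_cron_entry | crontab -
add_fan_cron_entry() {
  ENTRY='*/2 * * * * /boot/scripts/unraid-fan-speed.sh 1>/dev/null 2>&1'
  TEXT=$(cat)
  # keep whatever was already installed
  [ -n "$TEXT" ] && printf '%s\n' "$TEXT"
  # append the entry only once (idempotent across reboots)
  printf '%s\n' "$TEXT" | grep -q 'unraid-fan-speed.sh' || printf '%s\n' "$ENTRY"
}
```

Piping the result into `crontab -` avoids touching /var/spool/cron/crontabs at all, which should sidestep the root-account error while keeping the every-2-minutes schedule intact.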
  14. I'm trying out this fan_speed.sh script by aiden, located at http://lime-technology.com/forum/index.php?topic=11310.msg108144#msg108144, and it seemed to work fine over the past several days, but today I noticed my temps getting into the 50 degree Celsius range, which the script correctly identified and attempted to adjust the fan speed for; however, the fan speeds are only lounging at 750-ish RPMs and not the normal 1,400-ish RPMs at 100%:

      Sep 22 10:08:03 UnRAID logger: fan_speed: Highest disk drive temp is: 50C
      Sep 22 10:08:03 UnRAID logger: fan_speed: Changing disk drive fan speed from: 116 (45% @ 715 rpm) to: FULL (100% @ 712 rpm)
      Sep 22 10:10:14 UnRAID logger: fan_speed: Highest disk drive temp is: 51C
      Sep 22 10:10:14 UnRAID logger: fan_speed: Changing disk drive fan speed from: 121 (47% @ 744 rpm) to: FULL (100% @ 743 rpm)
      Sep 22 10:12:24 UnRAID logger: fan_speed: Highest disk drive temp is: 51C
      Sep 22 10:12:24 UnRAID logger: fan_speed: Changing disk drive fan speed from: 121 (47% @ 738 rpm) to: FULL (100% @ 743 rpm)
      Sep 22 10:14:35 UnRAID logger: fan_speed: Highest disk drive temp is: 51C
      Sep 22 10:14:35 UnRAID logger: fan_speed: Changing disk drive fan speed from: 121 (47% @ 739 rpm) to: FULL (100% @ 740 rpm)
      Sep 22 10:16:46 UnRAID logger: fan_speed: Highest disk drive temp is: 51C
      Sep 22 10:16:46 UnRAID logger: fan_speed: Changing disk drive fan speed from: 126 (49% @ 777 rpm) to: FULL (100% @ 774 rpm)
      Sep 22 10:18:57 UnRAID logger: fan_speed: Highest disk drive temp is: 51C
      Sep 22 10:18:57 UnRAID logger: fan_speed: Changing disk drive fan speed from: 126 (49% @ 770 rpm) to: FULL (100% @ 770 rpm)
      Sep 22 10:21:07 UnRAID logger: fan_speed: Highest disk drive temp is: 52C
      Sep 22 10:21:07 UnRAID logger: fan_speed: Changing disk drive fan speed from: 126 (49% @ 775 rpm) to: FULL (100% @ 772 rpm)
      Sep 22 10:23:18 UnRAID logger: fan_speed: Highest disk drive temp is: 52C
      Sep 22 10:23:18 UnRAID logger: fan_speed: Changing disk drive fan speed from: 126 (49% @ 778 rpm) to: FULL (100% @ 776 rpm)

      I verified the fan RPMs were only in the 750-ish range:

      root@UnRAID:~# cat /sys/class/hwmon/hwmon1/device/fan4_input
      771
      root@UnRAID:~# cat /sys/class/hwmon/hwmon1/device/fan5_input
      761
      root@UnRAID:~# cat /sys/class/hwmon/hwmon1/device/fan6_input
      cat: /sys/class/hwmon/hwmon1/device/fan6_input: No such file or directory
      root@UnRAID:~# cat /sys/class/hwmon/hwmon1/device/fan3_input
      777

      Yesterday, the script was managing the fan speeds just fine:

      Sep 21 19:11:49 UnRAID logger: fan_speed: Highest disk drive temp is: 38C
      Sep 21 19:11:49 UnRAID logger: fan_speed: Changing disk drive fan speed from: FULL (100% @ 1464 rpm) to: 228 (89% @ 1326 rpm)
      Sep 21 19:15:58 UnRAID logger: fan_speed: Highest disk drive temp is: 37C
      Sep 21 19:15:58 UnRAID logger: fan_speed: Changing disk drive fan speed from: 228 (89% @ 1324 rpm) to: 205 (80% @ 1206 rpm)
      Sep 21 19:32:16 UnRAID logger: fan_speed: Highest disk drive temp is: 36C
      Sep 21 19:32:16 UnRAID logger: fan_speed: Changing disk drive fan speed from: 205 (80% @ 1204 rpm) to: 182 (71% @ 1076 rpm)
      Sep 21 19:34:23 UnRAID logger: fan_speed: Highest disk drive temp is: 37C
      Sep 21 19:34:23 UnRAID logger: fan_speed: Changing disk drive fan speed from: 182 (71% @ 1076 rpm) to: 205 (80% @ 1207 rpm)
      Sep 21 19:36:29 UnRAID logger: fan_speed: Highest disk drive temp is: 36C
      Sep 21 19:36:29 UnRAID logger: fan_speed: Changing disk drive fan speed from: 205 (80% @ 1198 rpm) to: 182 (71% @ 1075 rpm)
      Sep 21 19:40:38 UnRAID logger: fan_speed: Highest disk drive temp is: 37C
      Sep 21 19:40:38 UnRAID logger: fan_speed: Changing disk drive fan speed from: 182 (71% @ 1084 rpm) to: 205 (80% @ 1209 rpm)
      Sep 21 19:42:45 UnRAID logger: fan_speed: Highest disk drive temp is: 36C
      Sep 21 19:42:45 UnRAID logger: fan_speed: Changing disk drive fan speed from: 205 (80% @ 1200 rpm) to: 182 (71% @ 1074 rpm)
      Sep 21 19:44:52 UnRAID logger: fan_speed: Highest disk drive temp is: 37C
      Sep 21 19:44:52 UnRAID logger: fan_speed: Changing disk drive fan speed from: 182 (71% @ 1082 rpm) to: 205 (80% @ 1207 rpm)
      Sep 21 19:47:00 UnRAID logger: fan_speed: Highest disk drive temp is: 36C
      Sep 21 19:47:00 UnRAID logger: fan_speed: Changing disk drive fan speed from: 205 (80% @ 1201 rpm) to: 182 (71% @ 1075 rpm)
      Sep 21 19:57:21 UnRAID logger: fan_speed: Highest disk drive temp is: 37C
      Sep 21 19:57:21 UnRAID logger: fan_speed: Changing disk drive fan speed from: 182 (71% @ 1075 rpm) to: 205 (80% @ 1198 rpm)
      Sep 21 19:59:28 UnRAID logger: fan_speed: Highest disk drive temp is: 36C
      Sep 21 19:59:28 UnRAID logger: fan_speed: Changing disk drive fan speed from: 205 (80% @ 1198 rpm) to: 182 (71% @ 1080 rpm)
      Sep 21 20:03:39 UnRAID logger: fan_speed: Highest disk drive temp is: 37C
      Sep 21 20:03:39 UnRAID logger: fan_speed: Changing disk drive fan speed from: 182 (71% @ 1082 rpm) to: 205 (80% @ 1201 rpm)

      I ran a different fan script, unraid-fan-speed.sh by Pauven (http://lime-technology.com/forum/index.php?topic=5548.105), and it correctly set the fan speeds to their normal RPMs at 100%:

      root@UnRAID:/boot/scripts# unraid-fan-speed.sh
      Linear PWM Range is 80 to 255 in 6 increments of 29
      Highest temp is: 52
      Setting pwm to: 255
      root@UnRAID:/boot/scripts# cat /sys/class/hwmon/hwmon1/device/fan4_input
      1473

      This suggests a script issue and not an OS issue, so for the time being, I'm switching back to Pauven's script and cron.
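One thing worth ruling out when a script writes FULL (255) but the fans stay slow: on many Super I/O chips (the w83627ehf driver family included), each PWM output has a companion pwmN_enable file in sysfs, and writes to pwmN can be ignored or overridden unless the enable mode is set to manual (1). This is only a hedged guess at the X9SCM-F behavior, not a confirmed diagnosis. A sketch — the helper takes the hwmon directory as a parameter so it can be pointed at the /sys/class/hwmon/hwmon1/device path from the logs above:

```shell
# set_fan_full: force one PWM channel into manual mode at 100% duty cycle.
# $1 = hwmon device directory, $2 = fan/pwm index
set_fan_full() {
  echo 1   > "$1/pwm$2_enable"   # 1 = manual PWM control (2 is often "auto")
  echo 255 > "$1/pwm$2"          # 255 = full duty cycle
}
# e.g.: set_fan_full /sys/class/hwmon/hwmon1/device 4
#       cat /sys/class/hwmon/hwmon1/device/fan4_input   # RPM should now rise
```

If the RPM jumps to the 1,400-ish range after forcing manual mode, the difference between the two scripts is likely just that Pauven's sets (or inherits) the right enable mode and aiden's doesn't.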
  15. I assume you actually mean peripheral cables, since the 4224 uses molex, not sata power connectors. Ya, I was totally in the "SATA" mind-set when writing the post, since it has been a month or so since I got it buttoned up, and I forgot that the cage backplanes use standard Molex. Shame about Seasonic not offering the cables themselves and referring to a third party, though eventually I may replace the splitter with one of them 3-Molex modular cables from the link; this is really just an aesthetic preference, as it's working just fine with the OEM modular cable plus one splitter.
  16. From what I recall, 16c was released before the "final" was officially put up.
  17. After an exhausting 43 hours of waiting for the pre-clear script to complete its tasks on a 2TB drive, success! I was able to add it to the array, where unRAID immediately went straight to the option to Format the drive. So there appears to be a bug I've come across in v5.0 rc16c in regard to its built-in clearing module: perhaps it involves a "used" drive formatted in NTFS. Anyways, if I come across it again on the remaining 3 drives I need to add to bring the array to 24 drives, I will remember to invoke the -n switch.
  18. Yea, lesson learned! But I thought I was past having to manually pre-clear drives with unRAID 5. Another thing of note: the first time around I couldn't access the shares of the server, but after stopping and starting the array, then reinitiating the pre-clear, I can now access them. Weird bug, but nothing to write home about presently, as I'm not sure what exactly prevented the shares from being accessible the first time, e.g. could it have been the pre-clear script, or was it a more obscure bug in unRAID 5? On step 2 of the 10-step pre-clearing process after a whole night of running...
  19. Okay, I got the pre-clear script running for the better half of a day, but I had to restart the computer that hosted the telnet session running the script. How can I tell if the preclear_disk.sh script is still running? I ran a "ps -ef" command but don't see it listed; since it's a script, I'm not sure how it would be listed in the processes. The script had definitely finished the first phase and was at 2% on the second phase. BTW, during at least the first phase, the array was not accessible from any remote clients through SMB or NFS, either PCs or media players (e.g. PCH), even though the web GUI was up and responding to all selections, and so was unMENU. EDIT: I think the closing of the remote telnet session terminated the running pre-clearing script before it finished, because I went ahead and stopped the array, then added the target drive, and unRAID still provided only the Clear option. (sigh) I have now initiated the script again via IPMI so it's truly running in a session on the machine itself. I'll have to wait until tomorrow morning to see if this whole process is successful in leading to the target drive getting formatted.
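On the "is it still running" question: a shell script shows up in ps under its interpreter, with the script path in the arguments column, so grepping for the script name works; the bracket trick keeps the grep command itself out of the results. A sketch (the /dev/sdu device and the idea that the long phases run via dd workers are assumptions from this thread, not verified behavior of the script):

```shell
# Is the script still alive? The [p] bracket stops grep matching its own line,
# because the literal text "[p]reclear" does not match the regex "preclear".
ps -ef | grep '[p]reclear_disk.sh' || echo "preclear_disk.sh not running"
# The long read/write phases may run via dd, so this is worth checking too:
ps -ef | grep '[d]d .*sdu' || echo "no dd workers on sdu"
# Next time, detach from the terminal so a dropped telnet session can't kill it:
#   nohup /boot/preclear_disk.sh /dev/sdu >/boot/preclear_sdu.log 2>&1 &
```

The nohup line (or running inside screen) is what makes the difference: without it, closing the telnet window sends a hangup to the script, which matches the symptom of the clear not "taking."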
  20. Yea. I still had a 2009 version of the pre-clear script on my flash drive, so I moseyed on over to the pre-clear script thread and noticed a newer version. When I ran the -l option (the list of drives eligible for pre-clearing), it reported no drives. Hrmph. Knowing that the target drive ID was "sdu," I went ahead with "preclear_disk.sh /dev/sdu" but the script came back with: I am assuming this was a result of the drive being initially added to the array and going through the unsuccessful built-in clearing operation, and that removing it from the array did not automatically unmount it. unMenu doesn't offer the option to unmount; just the option to "create ReiserFS" on the "sdu1" partition of the target drive, so I will restart the computer to see if it starts up unmounted. I'm in the middle of a lengthy file transfer to the array, so I'll do the restart after the transfer completes and report back.
  21. Just some follow-ups for future reference in case anyone else is contemplating similar hardware: 1) 0.5 meters is the perfect length for the SFF-8087 cables, allowing them to snake through the fan wall with a straight shot to the HBAs, with no extra slack to cause any unnecessary airflow impediments. 2) I settled on the fan-less SeaSonic SS-520FL2 520W PSU, and coupled with 120mm PWM Noctuas on the fan wall plus 80mm PWM Noctuas on the back, the server is practically silent: a top priority since it's located in the component rack of my home theater. I did have to resort to one SATA power splitter because the SS-520FL2 strangely comes with one 3-plug SATA cable and one 2-plug SATA cable; not enough to service the 6 SATA power receptacles of the drive backplanes.
  22. Yes, I'm familiar with that script from the 4.x days. So you are saying this script, if it completes successfully, will mark the drive as ready to be formatted under 5.0, yes?
  23. I've been migrating to a Norco RPC-4224, adding 7 formerly external USB drives filled with data to the array one by one, and have come across this error during pre-clearing on the sixth drive:

      ...
      Sep 16 17:19:15 UnRAID emhttp: clear: 98% complete
      Sep 16 17:25:07 UnRAID emhttp: clear: 99% complete
      Sep 16 17:31:05 UnRAID emhttp: clear: 100% complete
      Sep 16 17:37:10 UnRAID emhttp: pclear_new_disks: write: No space left on device

      After this, unRAID presents only the option to Clear the disk (again) instead of the usual next option to Format the disk after a clearing operation. I pressed Clear one more time, but after the hours-long operation I got the same error at the end, and only the Clear option was presented. I ran a short SMART test on both the suspect drive (sdu) and the parity drive, and they both passed. The syslog does not show any significant drive errors; just the "pclear_new_disks: write: No space left on device". Prior to the clearing operations, I had successfully copied all 2TB of files from this drive to the array with no errors (all within the reporting period of the syslog attached). So what should be my next step? Replace the drive, even though there are no drive-related errors indicated? If this is a "bug" (even though I had successfully cleared and formatted 5 drives previously), is there a way to manually mark the drive as "cleared" to allow me to proceed to the Format option? syslog-2013-09-17.txt.zip SMART_Reports.zip
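One thing that can be checked directly while waiting for answers: whether the clear actually zeroed the disk. A read-only spot check can compare a region of the drive against /dev/zero; `cmp -n` is the GNU diffutils byte-limit option. This is a sketch only — the /dev/sdu device name is from the post above, and it assumes nothing about where unRAID writes its cleared-disk signature:

```shell
# check_zeroed: compare the first N bytes of a device/file against /dev/zero.
# Read-only; prints a verdict instead of failing, so it is safe to script.
check_zeroed() {   # $1 = device or file, $2 = byte count
  if cmp -s -n "$2" "$1" /dev/zero; then
    echo "first $2 bytes of $1 are all zero"
  else
    echo "non-zero data found in first $2 bytes of $1"
  fi
}
# e.g.: check_zeroed /dev/sdu $((64*1024*1024))
```

If large regions come back all-zero, that points at the "marked as cleared" bookkeeping rather than the write itself, which would fit the later finding in this thread that the drive formatted fine after an external pre-clear.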
  24. this is a known ipad 'issue' with most sites. the 'fix' is just that apple makes you have to use the two finger scrolling gesture I've now confirmed this is absolutely not an iOS-specific issue, as the lack of scrolling ability when the vertical buttons are not fully viewable is present in both Mac and Windows web browsers whenever the browser window is not large enough to display the entire frame. I believe it has something to do with the frames used to display the sections of the interface, but I'm no web designer, so I'm just making an educated guess. For the vertical orientation of the "tabs," no vertical scroll bar is ever displayed when the browser window height is too short; for the detail frame contents, no horizontal scroll bar is presented if the browser window is not wide enough.
  25. Yep, that did the trick... But now the myMain page of unMenu doesn't load its images, with an error about setting "ImageHost". I am assuming the 07-unmenu-mymain.awk uses its own internal declaration or assignment for the server's name? If so, I think it's a bug, but in the meantime, how or where do I correct/set this without mucking it up?