
aiden

Members
  • Posts

    951
  • Joined

  • Last visited

Everything posted by aiden

  1. Very nice and clean. I dig your project link too. What SATA cables are you using?
  2. Hm... TB maybe? LOVE the look of that case. That all-drive look is very sexy.
  3. You can't read anything into the RAW values at the end of the lines. What you want to look for is any change in the first two (normalized) values for a given parameter between the initial and final reports. In this case, it looks like your drive is fine. 252 basically means a "virgin" drive; 100 usually means initialized. Btw, this really should be posted in the results thread.
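     To make the column reading concrete, here's a small sketch of pulling the normalized VALUE/WORST/THRESH columns (the ones worth watching) versus the RAW column out of a SMART attribute line. The sample line is a stand-in for real `smartctl -a /dev/sda` output, so treat the exact spacing as an assumption:

     ```shell
     # In a smartctl attribute row, fields 4 and 5 are the normalized VALUE and
     # WORST (judged against field 6, THRESH); the last field is the vendor RAW
     # counter, which often isn't meaningful on its own.
     line='  1 Raw_Read_Error_Rate     0x000b   095   095   016    Pre-fail  Always       -       1114112'
     echo "$line" | awk '{print "VALUE="$4, "WORST="$5, "THRESH="$6, "RAW="$10}'
     # -> VALUE=095 WORST=095 THRESH=016 RAW=1114112
     ```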
  4. Yes, my unraid was on the same system and was up during the preclear cycles. In fact, I was writing over 100 GB of data to the array at one point, and only using the onboard SATA ports for all of it. This is all on a D510 Atom system. I would venture a guess that processor speed has little to do with preclearing the drive, and that it is most likely RAM and bus related.
  5. Hm... that's interesting that your speeds are that low. I have precleared several 2 TB drives, from 7200 rpm down to 5400 rpm, and the lowest I've seen was 45 MB/s. The 7200 rpm drives run roughly 40-50% faster overall, which is about what you'd expect. Never had one take 52 hours for a preclear cycle; they usually finish in 35-38 hours. That's running off the motherboard ports. My system has 2 GB ram in it, but if I remember correctly, when memory becomes an issue, the script either hangs or exits.
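     As a rough sanity check on those times (back-of-envelope, assuming one cycle makes about three full passes over the disk: pre-read, zero write, post-read):

     ```shell
     # Approximate hours per preclear cycle for a 2 TB drive at a sustained
     # ~45 MB/s, using integer shell arithmetic (so the result is rounded down).
     SIZE_MB=2000000          # 2 TB, in decimal MB
     RATE=45                  # sustained throughput, MB/s
     PASS_H=$(( SIZE_MB / RATE / 3600 ))   # hours for one full pass
     echo "$(( PASS_H * 3 )) hours per cycle (approx)"
     # -> 36 hours per cycle (approx)
     ```

     That lands right in the 35-38 hour range, so a 52-hour cycle really does point at an unusually slow effective transfer rate.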
  6. A few preclear cycles should clear up any worries about a new drive. Either it's stable, or it's not. As far as the fail rate goes, I think the manufacturers are playing the consumer market like a song. We constantly replace drives because new capacities are released, or because a drive fails in 3 years. Planned obsolescence?
  7. The 253 on the first drive is an initialization value indicating the drive has never been used before. The other drives look fine. It's odd that you had a dead drive from Limetech, as they supposedly preclear the drives themselves before shipment. Did you buy the drives from somewhere else?
  8. I'm running the cycles via 2 telnet sessions from a laptop that I leave on 24/7. Never logged off the server, never shut it down, and the telnet windows have been moved, minimized, maximized, etc without any issues thus far. It's not a big issue to me, I can easily restart the remaining cycles. I was more curious if it was something more fundamental, like the system just got bored.
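     One way to make long runs like this survive a dropped telnet session is to detach the job from the terminal with nohup (or run it under screen, if it's installed). A sketch of the pattern, where the preclear script path and options are assumptions you'd adjust for your setup:

     ```shell
     # Detach a long-running job from the telnet session so it keeps running
     # if the connection drops; progress goes to a log you can tail later
     # from any new session.
     LOG=/tmp/preclear_sda.log
     # Real usage would be something like (path/options are hypothetical):
     #   nohup /boot/preclear_disk.sh -c 10 /dev/sda > "$LOG" 2>&1 &
     # A trivial stand-in command demonstrates the same pattern:
     nohup sh -c 'echo "cycle 1 of 10 started"' > "$LOG" 2>&1 &
     wait $!            # only so this demo finishes; a real preclear runs for days
     tail -n 1 "$LOG"   # check progress any time, from any session
     ```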
  9. Syslog didn't reveal anything. I'll just start the final 4 cycles. Thanks.
  10. Me either. It completely stopped running, on both drives, simultaneously. Is there another log that I can look at to see what happened? Should I just restart with the remaining number of cycles?
  11. I'm using the latest download from here. The modified date is 10/6/2009 9:15. Process list:

      UID        PID  PPID C STIME TTY      TIME CMD
      root         1     0 0 Jan08 ?    00:00:02 init
      root         2     0 0 Jan08 ?    00:00:00 [kthreadd]
      root         3     2 0 Jan08 ?    00:00:00 [migration/0]
      root         4     2 0 Jan08 ?    00:00:00 [ksoftirqd/0]
      root         5     2 0 Jan08 ?    00:00:00 [migration/1]
      root         6     2 0 Jan08 ?    00:00:00 [ksoftirqd/1]
      root         7     2 0 Jan08 ?    00:00:00 [events/0]
      root         8     2 0 Jan08 ?    00:00:00 [events/1]
      root         9     2 0 Jan08 ?    00:00:00 [khelper]
      root        14     2 0 Jan08 ?    00:00:00 [async/mgr]
      root       107     2 0 Jan08 ?    00:00:00 [kblockd/0]
      root       108     2 0 Jan08 ?    00:00:00 [kblockd/1]
      root       109     2 0 Jan08 ?    00:00:00 [kacpid]
      root       110     2 0 Jan08 ?    00:00:00 [kacpi_notify]
      root       111     2 0 Jan08 ?    00:00:00 [kacpi_hotplug]
      root       187     2 0 Jan08 ?    00:00:00 [ata/0]
      root       188     2 0 Jan08 ?    00:00:00 [ata/1]
      root       189     2 0 Jan08 ?    00:00:00 [ata_aux]
      root       193     2 0 Jan08 ?    00:00:00 [ksuspend_usbd]
      root       198     2 0 Jan08 ?    00:00:00 [khubd]
      root       201     2 0 Jan08 ?    00:00:00 [kseriod]
      root       262     2 0 Jan08 ?    01:42:23 [pdflush]
      root       263     2 0 Jan08 ?    01:39:06 [pdflush]
      root       264     2 3 Jan08 ?    07:13:20 [kswapd0]
      root       307     2 0 Jan08 ?    00:00:00 [aio/0]
      root       308     2 0 Jan08 ?    00:00:00 [aio/1]
      root       314     2 0 Jan08 ?    00:00:00 [nfsiod]
      root       319     2 0 Jan08 ?    00:00:00 [cifsoplockd]
      root       547     2 0 Jan08 ?    00:00:00 [usbhid_resumer]
      root       553     2 0 Jan08 ?    00:00:00 [rpciod/0]
      root       554     2 0 Jan08 ?    00:00:00 [rpciod/1]
      root       725     2 0 Jan08 ?    00:00:00 [scsi_eh_0]
      root       726     2 0 Jan08 ?    00:00:00 [usb-storage]
      root       728     2 0 Jan08 ?    00:00:00 [scsi_eh_1]
      root       729     2 0 Jan08 ?    00:00:00 [scsi_eh_2]
      root      1051     1 0 Jan08 ?    00:00:00 /usr/sbin/syslogd -m0
      root      1055     1 0 Jan08 ?    00:00:00 /usr/sbin/klogd -c 3 -x
      root      1094     1 0 Jan08 ?    00:00:00 /usr/sbin/ifplugd -i eth0 -fwI -
      bin       1102     1 0 Jan08 ?    00:00:00 /sbin/rpc.portmap
      nobody    1106     1 0 Jan08 ?    00:00:00 /sbin/rpc.statd
      root      1116     1 0 Jan08 ?    00:00:00 /usr/sbin/inetd
      root      1126     1 0 Jan08 ?    00:00:00 /usr/sbin/acpid
      root      1133     1 0 Jan08 ?    00:00:00 /usr/sbin/crond -l10
      daemon    1135     1 0 Jan08 ?    00:00:00 /usr/sbin/atd -b 15 -l 1
      root      1140     1 0 Jan08 ?    00:00:12 /usr/sbin/nmbd -D
      root      1142     1 0 Jan08 ?    00:00:00 /usr/sbin/smbd -D
      root      1144  1142 0 Jan08 ?    00:00:00 /usr/sbin/smbd -D
      root      1150     1 0 Jan08 ?    00:00:00 /usr/local/sbin/emhttp
      root      1155     1 0 Jan08 tty1 00:00:00 /sbin/agetty 38400 tty1 linux
      root      1156     1 0 Jan08 tty2 00:00:00 /sbin/agetty 38400 tty2 linux
      root      1158     1 0 Jan08 tty3 00:00:00 /sbin/agetty 38400 tty3 linux
      root      1160     1 0 Jan08 tty4 00:00:00 /sbin/agetty 38400 tty4 linux
      root      1167     2 0 Jan08 ?    00:00:00 [mdrecoveryd]
      root      1174     1 0 Jan08 tty5 00:00:00 /sbin/agetty 38400 tty5 linux
      root      1176     1 0 Jan08 tty6 00:00:00 /sbin/agetty 38400 tty6 linux
      root      1235     1 0 Jan08 ?    00:00:00 /usr/sbin/ntpd -g -p /var/run/nt
      root      1255     1 0 Jan08 ?    00:00:00 /bin/bash ./uu
      root      1256     1 0 Jan08 ?    00:00:00 logger -tunmenu -plocal7.info -i
      root      1257  1255 0 Jan08 ?    00:00:01 awk -W re-interval -f ./unmenu.a
      root      6576     1 0 Jan15 ?    00:00:00 udevd --daemon
      root     25154  1116 0 10:35 ?    00:00:00 in.telnetd: 192.168.10.199
      root     25155 25154 0 10:35 pts/1 00:00:00 -bash
      root     25166 25155 0 10:35 pts/1 00:00:00 ps -ef
  12. Joe, as you know, I've been running 10 preclear cycles on two 2TB Hitachi drives. They are currently at 83% done on the post-read of cycle 6, at 170 hrs. The problem is, my telnet windows seem to have gotten stuck or something, because I am no longer getting refreshed on the progress. I checked the read / write columns in myMain under unMenu, and the numbers aren't changing. When I touch the drives it feels like they're still spinning, and unMenu seems to confirm this. This is the past few days in the syslog...

      Jan 14 02:21:08 Tower kernel: sdb: sdb1
      Jan 14 02:21:19 Tower kernel: udev: starting version 130
      Jan 14 02:48:36 Tower kernel: sda: sda1
      Jan 14 02:48:47 Tower kernel: udev: starting version 130
      Jan 15 07:17:50 Tower kernel: sdb: sdb1
      Jan 15 07:18:00 Tower kernel: udev: starting version 130
      Jan 15 07:49:58 Tower kernel: sda: sda1
      Jan 15 07:50:09 Tower kernel: udev: starting version 130
      Jan 16 09:15:13 Tower unmenu[1256]: gawk: ./08-unmenu-array_mgmt.awk:115: warning: escape sequence `\'' treated as plain `''
  13. Thanks Joe. It's all new to me, but thanks for the encouragement. lboregard - Just an FYI, you can open multiple sessions to your server remotely and run cycles on all your drives simultaneously.
  14. If you look here, the value 253 refers to the drive being completely virgin and brand new. After your first cycle, that value drops to 200, indicating that the drive has no failures but is no longer new. That SMART report looks clean to me, and I would bet that your second or third cycle will contain no changes. What 2TB drive are you using? I have 2 of them running 10 cycles atm and they are averaging about 28 hours for a full cycle. The old mobo I'm using is only SATA 150.
  15. Okay, this is the first preclear of a Hitachi Deskstar 2TB 7200rpm drive, model HD32000IDK7. Am I correct in assuming this drive is healthy? Why did the value go from 95 up to 100 after the preclear?

      ===========================================================================
      = unRAID server Pre-Clear disk /dev/sda
      = cycle 1 of 1
      = Disk Pre-Clear-Read completed                                  DONE
      = Step 1 of 10 - Copying zeros to first 2048k bytes              DONE
      = Step 2 of 10 - Copying zeros to remainder of disk to clear it  DONE
      = Step 3 of 10 - Disk is now cleared from MBR onward.            DONE
      = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4        DONE
      = Step 5 of 10 - Clearing MBR code area                          DONE
      = Step 6 of 10 - Setting MBR signature bytes                     DONE
      = Step 7 of 10 - Setting partition 1 to precleared state         DONE
      = Step 8 of 10 - Notifying kernel we changed the partitioning    DONE
      = Step 9 of 10 - Creating the /dev/disk/by* entries              DONE
      = Step 10 of 10 - Testing if the clear has been successful.      DONE
      = Disk Post-Clear-Read completed                                 DONE
      Disk Temperature: 42C, Elapsed Time: 27:22:47
      ============================================================================
      ==
      == Disk /dev/sda has been successfully precleared
      ==
      ============================================================================
      S.M.A.R.T. error count differences detected after pre-clear
      note, some 'raw' values may change, but not be an indication of a problem
      19,20c19,20
      < Offline data collection status: (0x80) Offline data collection activity
      < was never started.
      ---
      > Offline data collection status: (0x84) Offline data collection activity
      > was suspended by an interrupting command from host.
      52c52
      <   1 Raw_Read_Error_Rate     0x000b   095   095   016    Pre-fail  Always       -       1114112
      ---
      >   1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
      ============================================================================
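     For anyone curious how that "differences detected" section is produced: it's essentially a diff of two SMART snapshots taken before and after the run. A minimal sketch of the idea, where the sample lines and /tmp paths are stand-ins for real `smartctl -a /dev/sda` output:

     ```shell
     # Snapshot the SMART attribute table before and after a long operation,
     # then diff the two; any changed normalized values show up as <,> pairs.
     cat > /tmp/smart_before.txt <<'EOF'
       1 Raw_Read_Error_Rate     0x000b   095   095   016    Pre-fail  Always       -       1114112
     EOF
     cat > /tmp/smart_after.txt <<'EOF'
       1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
     EOF
     # diff exits non-zero when the files differ, so '|| true' keeps going
     diff /tmp/smart_before.txt /tmp/smart_after.txt || true
     ```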