Everything posted by FreeMan

  1. It's also reporting nearly 900 errors, so it's definitely time to stop relying on it until further testing can be completed. Well, I've got a new 4TB on the way, and I've managed to get all the data off the old drive, so I'm part of the way there. I'd love to RMA it, but it's about 5 years old now, and I'm pretty sure it only had a 1-year, maybe 2-year, warranty. Thanks for the feedback.
  2. I have a USB drive that I'd used to set up an older version of unRAID (4.x or 5.x, I don't recall). Can I safely wipe the files from it, then extract the current v6 beta to it, or do I need to reformat & re-run make_bootable? (A sketch of the usual flash-prep flow follows this list.) EDIT: Also, where do I go for a good primer on v6? I know there are a lot of new features - dockers, plugins, VMs, multiple file systems. Where's the source for what's what, and the advantages of one choice over another? I've taken a brief tour through the Wiki, but there are a lot of "to be written" sections still.
  3. Based on the attached syslog, it looks like it's time to replace drive 5. Anything I should check on or do before doing so? A correcting parity check just finished because the server didn't shut down cleanly, and it fixed 4 errors. syslog-2014-12-09.zip
  4. I made this change at line 136:

         #shabin="/usr/bin/sha256deep"
         shabin="/usr/local/bin/sha256deep"

     and now it's running a treat! Except that my test folder was on a drive with 0 bytes free and there wasn't enough room to write the extended attributes to disk. A touch of file rearranging will fix that right up. Thanks for the script and for the tips to get it working.
  5. That's what I installed following your instructions in the OP. Then, when it didn't work, I downloaded the 64-bit hashdeep from the link in your OP. I'll have to take a look through the unMenu md5deep package to see if there's a 32-bit hashdeep in there, or maybe alter the script to call one of the hashes that is installed. That'll have to be after work, though...
  6. Well, that answers quite a few questions. I've been meaning to set up a 6.0beta test system, guess I'd better get on it...
  7. OK, started digging through the script itself. which sha256deep yields nothing, so it executes installpkg hashdeep-4.4-x86_64-1rj.txz > /dev/null and still doesn't work (even though hashdeep-4.4-x86_64-1rj.txz is in the same directory as bitrot.sh). I tried executing it by hand:

         root@NAS:/boot/Scripts# installpkg hashdeep-4.4-x86_64-1rj.txz
         Verifying package hashdeep-4.4-x86_64-1rj.txz.
         expr: syntax error
         /sbin/installpkg: line 439: echo: write error: No space left on device
         /sbin/installpkg: line 439: echo: write error: No space left on device
         (the line 439 error repeated 12 times in all)
         /sbin/installpkg: line 442: echo: write error: No space left on device
         cat: write error: No space left on device
         Installing package hashdeep-4.4-x86_64-1rj.txz:
         PACKAGE DESCRIPTION:
         /sbin/installpkg: line 508: echo: write error: No space left on device
         /sbin/installpkg: line 509: echo: write error: No space left on device
         /sbin/installpkg: line 510: echo: write error: No space left on device
         /sbin/installpkg: line 511: echo: write error: No space left on device
         /sbin/installpkg: line 516: echo: write error: No space left on device
         /sbin/installpkg: line 521: echo: write error: No space left on device
         WARNING: Package has not been created with 'makepkg'
         /sbin/installpkg: line 530: echo: write error: No space left on device
         Package hashdeep-4.4-x86_64-1rj.txz installed.

     Where is this trying to install to that I'm running out of disk space? I've got 3.7GB free on my flash drive. Am I out of space on the virtual drive? (A diagnostic sketch follows this list.) The server has been up for 132 days, and I've got 8GB of RAM installed. It looks like I've got about 520MB of RAM used, 4.14GB cached and 2.75GB free.
  8. I've downloaded gcc and md5deep. I set md5deep to reinstall on reboot, but not gcc. Since md5deep downloads and compiles on reboot, won't I need to set gcc to download & install itself, too? EDIT: I'm getting the same issue as Duppie:

         root@NAS:/boot/packages/md5deep/hashdeep# bitrot.sh -a -p "/mnt/user/Home Video"
         bitrot, by John Bartlett, version 1.0
         Error: The hashdeep package has not been installed.
         root@NAS:/boot/packages/md5deep/hashdeep# which hashdeep
         /boot/packages/md5deep/hashdeep/hashdeep

     Running 5.0.4, I installed md5deep via unMenu. Any thoughts? (A possible workaround is sketched after this list.)
  9. For the last several weeks, every drive in my array has been spinning. I've made no setup changes for quite a while, so I'm not sure what the issue is. I have SAB, SB, & CP installed to my cache drive. I run torrents, but they are all feeding to/from the newest disk (not many old things are being seeded). Running these two commands:

         /usr/bin/lsof | grep "/mnt/user"
         /usr/bin/lsof | grep "/mnt/disk"

     reveals only a few files open on my data drives. I'm running cache_dirs, so that should eliminate spinning disks up for directory listings. I've confirmed that the system default spin-down is 1 hour and that all disks are using the system default. I noticed that it's set to use spin-up groups, and that most of the drives are in their own spin-up group (a few are set to 'blank'). I can manually spin down disks, and they will stop, but within a couple of minutes they spin up again. (A sketch for spotting which disks are actually seeing I/O follows this list.) I'd include a syslog, but I have three - syslog, syslog.1, syslog.2 - dated 5 Aug, 6 Aug & 15 Aug. It seems that log rotation isn't working properly, either.

     Version: 5.0.4
     Uptime: 132 days, 8 hours, 55 minutes

     I'm sure I should update to 5.0.6 (copy bzimage and bzroot to the root of my flash drive & restart the server, right?). Any other tips or suggestions? Thanks, FreeMan
  10. Ooh, ownCloud sounds interesting! Anybody know if it will run on 5.0.6?
  11. I've been using Dropbox to sync camera phone shots for years. I'm not sure what subscription you're talking about, but I've never given the Dropbox folks a dime. I did have to go into my Dropbox\Camera Uploads directory and clear out about 3 years' worth of old pictures to free up some space, but that's about it.
  12. I just picked up a couple of refurb 300GB 10,000RPM drives that I intend to use as system drives in my Windows machines, but since they're refurbs & only have a 90-day warranty, I want to use preclear to give them a good workout. I've searched this thread, and it appears that preclear will work just fine on a drive mounted via a SATA->USB dock, but I'm getting an error:

         root@NAS:/boot/Scripts# preclear_disk.sh -l
         ====================================1.15
         Disks not assigned to the unRAID array
           (potential candidates for clearing)
         ========================================
            /dev/sdn = usb-JMicron_USB_to_ATA_ATAPI_Bridge_152D203380B6-0:0

     So the drive is recognized with no issues, but:

         root@NAS:/boot/Scripts# preclear_disk.sh -c 10 /dev/sdn
         Sorry: Device /dev/sdn is not responding to an fdisk -l /dev/sdn command.
         You might try power-cycling it to see if it will start responding.

     This is an older 'dock' that I borrowed from a buddy. It's a Rosewill brand (from Newegg) and is actually just a power brick & connector and a USB-SATA cable, so there's nothing 'dockey' about it. I've got the drive sitting on its anti-static bag on top of the case, and it's recognized by unRAID as soon as I power it on, but preclear doesn't seem to like it. Any thoughts or suggestions? (A few manual checks are sketched after this list.)
  13. I just upgraded my parity from a 3TB drive to a 4TB. The 4TB precleared & parity has been rebuilt & checked just fine. I went to preclear the 3TB, and I get this output from preclear_disk.sh before starting the clear:

         root@NAS:/boot/scripts# preclear_disk.sh -n /dev/sdm
         Pre-Clear unRAID Disk /dev/sdm
         ################################################################## 1.15
         Model Family:     Western Digital Caviar Green (AF, SATA 6Gb/s)
         Device Model:     WDC WD30EZRX-00D8PB0
         Serial Number:    WD-WMC4N0533589
         LU WWN Device Id: 5 0014ee 603d7fd9c
         Firmware Version: 80.00A80
         User Capacity:    3,000,592,982,016 bytes [3.00 TB]

         WARNING: GPT (GUID Partition Table) detected on '/dev/sdm'! The util fdisk doesn't support GPT. Use GNU Parted.

         Disk /dev/sdm: 3000.6 GB, 3000592982016 bytes
         256 heads, 63 sectors/track, 363376 cylinders, total 5860533168 sectors
         Units = sectors of 1 * 512 = 512 bytes
         Sector size (logical/physical): 512 bytes / 4096 bytes
         I/O size (minimum/optimal): 4096 bytes / 4096 bytes
         Disk identifier: 0x00000000

            Device Boot      Start         End      Blocks   Id  System
         /dev/sdm1               1  4294967295  2147483647+  ee  GPT
         Partition 1 does not start on physical sector boundary.
         ########################################################################
         invoked as ./preclear_disk.sh -n /dev/sdm
         ########################################################################
         (MBR 4k-aligned set. Partition will start on sector 64 for disks <= 2.2TB and sector 1 for disks > 2.2TB)

     Though I've upgraded my parity before (from 2 to 3TB), I don't recall seeing a message like this, though it may have been there. What concerns me is "Partition 1 does not start on physical sector boundary." Is that an issue to worry about? (A quick alignment check is sketched after this list.) I'm running unRAID v5.0.4. TIA, FreeMan
  14. I'm going to go ahead and run the preclear, but I get this:

         Disk /dev/sdm: 3000.6 GB, 3000592982016 bytes
         256 heads, 63 sectors/track, 363376 cylinders, total 5860533168 sectors
         Units = sectors of 1 * 512 = 512 bytes
         Sector size (logical/physical): 512 bytes / 4096 bytes
         I/O size (minimum/optimal): 4096 bytes / 4096 bytes
         Disk identifier: 0x00000000

            Device Boot      Start         End      Blocks   Id  System
         /dev/sdm1               1  4294967295  2147483647+  ee  GPT
         Partition 1 does not start on physical sector boundary.
         ########################################################################
         invoked as ./preclear_disk.sh -n /dev/sdm
         ########################################################################
         (MBR 4k-aligned set. Partition will start on sector 64 for disks <= 2.2TB and sector 1 for disks > 2.2TB)

     Note that this was previously a parity disk. Is "Partition 1 does not start on physical sector boundary" anything to worry about, considering "(MBR 4k-aligned set. Partition will start on sector 64 for disks <= 2.2TB and sector 1 for disks > 2.2TB)"?
  15. I'm running into the same issue as thegizzard. I'm finishing up a parity check after replacing my 3TB parity drive with a new 4TB parity drive. The 3TB is not assigned to the array, but preclear_disk.sh is reporting "no un-assigned disks detected". I know the disk is not in use, and I know it's assigned as /dev/sdm after the last reboot. Is there anything I should do to get preclear to recognize it as not in the array, or should I just fire off the preclear at my own risk? TIA, FreeMan
  16. > The writing to a drive will use most of your free memory in the disk buffer cache, displacing anything previously cached. Any other activity on the server will therefore need to spin up the physical disks to read their contents. It could therefore be anything, even a PC on your lan attempting to update its directory listings. (don't sweat it) - Joe L.

     Ah. I have Cache Dirs running, so every 15 min or so, it's scanning the full directory tree to keep it in the (temporarily) non-existent cache memory. Guess that'll do it. Odd, I don't remember having noticed that before, but no big deal.
  17. Love the script, thanks for writing it! I've used it on every disk in my machine (9 data, 1 parity, 1 cache). Currently preclearing my first 4TB drive, and it's actually zipping along quite nicely: 18 hours in and it's on the post-read (currently ~150 MB/s) of cycle 1. Just a thought for future revisions: instead of (or in addition to) saying "DONE" next to a step when it's completed, could you put the elapsed time that step took? Or, alternatively, just the elapsed time from start would be fine - I can do the math if that's easier for the script. (A tiny sketch of what I mean follows this list.) Thanks again, JoeL! FreeMan Edit: Also, anything in version 1.14* that might keep all drives in the array spinning? I don't recall having seen this behavior in the past, but I can't identify anything else going on 'round here that would be keeping every drive busy. * Yes, I discovered there is now a 1.15, but I'm not running a 64-bit OS yet, so it looks like 1.14 will do for this run. I'll update as soon as it's finished.
  18. http://www.newegg.com/Product/Product.aspx?Item=N82E16822148844&nm_mc=EMC-IGNEFL062014&cm_mmc=EMC-IGNEFL062014-_-EMC-062014-Index-_-InternalHardDrives-_-22148844-L0E
      Promo Code: EMCPDHP35
      27% bad reviews on Newegg, though - anyone here have experience with it?
  19. Winner, winner, chicken dinner!!! Thanks Rob, that was the ticket. Despite wearing contacts, we all need 4 eyes. Thanks - don't know why I didn't think of that...
  20. I'm running a Gigabyte F2A88XM-D3H mobo with an AMD A6 5400K CPU. I found this link http://ubuntuforums.org/showthread.php?t=2201555 and this one http://www.phoronix.com/forums/showthread.php?96788-Gigabyte-F2A88XM-D3H-AMD-A88X/page3 that both indicated that I needed to use modprobe it87 force_id=0x8728. Having done that, sensors now reports:

         k10temp-pci-00c3
         Adapter: PCI adapter
         MB Temp:     +9.4 C  (high = +70.0 C)  (crit = +80.0 C, hyst = +79.0 C)

         it8728-isa-0228
         Adapter: ISA adapter
         in0:         +1.00 V  (min = +0.00 V, max = +3.06 V)
         in1:         +1.49 V  (min = +0.00 V, max = +3.06 V)
         in2:         +2.02 V  (min = +0.00 V, max = +3.06 V)
         in3:         +2.02 V  (min = +0.00 V, max = +3.06 V)
         in4:         +2.00 V  (min = +0.00 V, max = +3.06 V)
         in5:         +2.22 V  (min = +0.00 V, max = +3.06 V)
         in6:         +2.22 V  (min = +0.00 V, max = +3.06 V)
         3VSB:        +3.36 V  (min = +0.00 V, max = +6.12 V)
         Vbat:        +3.02 V
         fan1:        1917 RPM  (min = 10 RPM)
         fan2:           0 RPM  (min = 10 RPM)
         fan3:           0 RPM  (min = 0 RPM)
         fan4:           0 RPM  (min = 0 RPM)
         fan5:           0 RPM  (min = 0 RPM)
         temp1:       +29.0 C  (low = +127.0 C, high = +127.0 C)  sensor = thermistor
         temp2:        -8.0 C  (low = +127.0 C, high = +127.0 C)  sensor = thermistor
         temp3:       +30.0 C  (low = +0.0 C, high = +60.0 C)  sensor = Intel PECI
         intrusion0:  ALARM

     I ended up with this in my sensors.conf:

         # sensor configuration
         chip "k10temp-*"
         #chip "it8728-isa-0028"
         label temp1 "MB Temp"
         label temp3 "CPU Temp"

     With the first "chip" line in effect, I get the somewhat ridiculously low reading in the single digits (C) as a temp being displayed in Dynamix. If I uncomment the 2nd "chip" line instead, I get no readings showing up. The values from the it8728 chip seem to be accurate, or at least reasonable - how do I get them to display instead of the values from the k10temp chip? (A sensors.conf sketch follows this list.) (As a side note, how do I get a nicely formatted word as a link, instead of displaying the full link? I know how to do <a href="url">text</a> in HTML, but how do I do it in BBCode?)
  21. My guess is you received a drive with some problems that had been 'repaired' by clearing the SMART tables, masking the problems, so the first test runs re-exposed them. The first 2 Preclears seem to have dealt with most of them, and the third looks much better, but I'm not confident you've uncovered ALL of the marginal sectors yet. I'd run 2 or 3 more Preclears, and I'd only feel more confident if I had at least 2 passes with NO further changes: no more Current Pending sectors at any phase, no additional Uncorrectables, no additional Reallocated sectors. If you're interested and have time, you might also try a full badblocks run with the -w option (sketched after this list). The other possibility is that it's a bad drive, and it's going to continue getting worse. I suspect that after another Preclear, you will either know it's bad or may decide that you aren't willing to trust the drive, even if it starts behaving and has clean reports. I would suggest keeping all the pre-clear SMART results, just in case Seagate questions why you want to return a drive they just shipped you. From what I've read around here, they don't normally question it, but returning a just-shipped drive might raise an eyebrow somewhere. What's a few K of disk space for a few weeks, just to be on the safe side...
  22. I've been having other issues with the box since upgrading from 4.7 to 5.0.4 and adding 2 drives, so I'll see if I can get those sorted before totally giving up on this drive. On the bright side, I picked up the drive locally at Fry's, so it shouldn't be an issue walking back in to return it as DOA. The drive was on sale for $100 - if it hadn't been just before Christmas, with lots o' cash already spent on presents, I'd have picked up several. EDIT: Seems that it was an issue with the SATA controller card I also purchased. The drive's fine, preclear has finished and parity is now syncing.
  23. Thanks for your input, Gary & ProStuff! I ended up grabbing the ThermalTake PSU out of a dead machine I have sitting here, and the difference is like night and day! Interesting, since the CX series Corsairs are listed in the first couple of posts as recommended good PSUs. I'm not here to argue the point with you, since I'm the one asking for info, but is it worthwhile having a discussion amongst those who do know about whether those are good recommendations to leave up front there? If there are design issues, maybe it's time to update the recommendations...
  24. The pre-read hung at 99%. I rummaged through the end of the syslog, and this is what I found:

         Dec 29 10:36:28 NAS kernel: sd 7:0:0:0: [sdg] CDB:
         Dec 29 10:36:28 NAS kernel: cdb[0]=0x88: 88 00 00 00 00 01 5d 50 78 00 00 00 00 08 00 00
         Dec 29 10:36:28 NAS kernel: sd 7:0:0:0: [sdg] Unhandled error code
         Dec 29 10:36:28 NAS kernel: sd 7:0:0:0: [sdg]
         Dec 29 10:36:28 NAS kernel: Result: hostbyte=0x04 driverbyte=0x00
         Dec 29 10:36:28 NAS kernel: sd 7:0:0:0: [sdg] CDB:
         Dec 29 10:36:28 NAS kernel: cdb[0]=0x88: 88 00 00 00 00 01 5d 50 78 00 00 00 00 08 00 00
         Dec 29 10:36:28 NAS kernel: sd 7:0:0:0: [sdg] Unhandled error code
         Dec 29 10:36:28 NAS kernel: sd 7:0:0:0: [sdg]
         Dec 29 10:36:28 NAS kernel: Result: hostbyte=0x04 driverbyte=0x00
         (this same block repeated several more times)

     There were a lot of lines like that, and the ones I checked were all the same. Can anyone interpret that for me? I'm not sure what it means beyond 'not good'.
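
Sketch for post 2 - the usual v6 flash-prep flow, assuming a Linux box and that the stick shows up as /dev/sdX (both assumptions; the zip filename below is illustrative, and the zip's own make_bootable script is the supported route on Windows):

    # Back up the old config first (keeps the key file and disk assignments).
    cp -r /path/to/stick/config ~/unraid-config-backup   # path is illustrative

    # A clean reformat avoids leftovers from the old 4.x/5.x install.
    # unRAID expects a FAT32 volume labeled UNRAID.
    mkfs.vfat -F 32 -n UNRAID /dev/sdX1

    # Mount, extract the v6 beta zip onto the stick, then make it bootable.
    mount /dev/sdX1 /mnt/stick
    unzip unRAIDServer-6.0-beta.zip -d /mnt/stick
    syslinux /dev/sdX1    # roughly what the bundled make_bootable does
    umount /mnt/stick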
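
Sketch for post 7 - installpkg unpacks into the root filesystem, which on unRAID lives in RAM rather than on the flash drive, so free space on /boot doesn't help here. A quick way to see which filesystem is actually full (standard tools only; the /var/log guess ties in with the broken log rotation mentioned in post 9):

    # Show usage for the RAM-backed root, the log area, and the flash drive.
    df -h / /var/log /boot

    # If /var/log is the full one, the biggest offenders show up with:
    du -sh /var/log/* | sort -h | tail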
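
Sketch for post 8 - `which` only finds the binary because the current directory/PATH happens to include the unMenu package folder; the script likely searches the standard locations. A hedged workaround (the target directory is an assumption, though post 4 suggests the script ended up looking in /usr/local/bin; the sha256deep path is a guess at the unMenu package layout):

    # Put the binary somewhere a standard PATH lookup will find it.
    cp /boot/packages/md5deep/hashdeep/hashdeep /usr/local/bin/
    chmod +x /usr/local/bin/hashdeep

    # bitrot.sh calls sha256deep specifically (see posts 4 and 7); the
    # md5deep suite builds it as its own executable, so copy it too if
    # the unMenu package includes it.
    cp /boot/packages/md5deep/sha256deep /usr/local/bin/ 2>/dev/null
    chmod +x /usr/local/bin/sha256deep 2>/dev/null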
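
Sketch for post 9 - when lsof comes up empty, watching /proc/diskstats at least shows which devices are taking I/O and roughly when. A minimal watcher (standard /proc interface; field 4 is reads completed, field 8 writes completed):

    #!/bin/bash
    # Print per-disk read/write completion counts every 30s; a disk whose
    # numbers keep climbing is the one being hit.
    while true; do
        date
        awk '$3 ~ /^sd[a-z]+$/ { print $3, "reads:", $4, "writes:", $8 }' /proc/diskstats
        sleep 30
    done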
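
Sketch for post 12 - a few manual checks to see whether the drive answers at all outside of preclear (plain fdisk/smartctl/dd; the -d sat flag is only a guess at what this JMicron bridge needs):

    # Does the kernel report a size/partition table for it?
    fdisk -l /dev/sdn

    # Can SMART be reached through the USB-SATA bridge? Many such bridges
    # need the SAT pass-through requested explicitly.
    smartctl -i -d sat /dev/sdn

    # Raw read test: if this stalls or errors, the bridge (or drive) is
    # the problem, not preclear.
    dd if=/dev/sdn of=/dev/null bs=1M count=1024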
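
Sketch for posts 13/14 - the "Partition 1 does not start on physical sector boundary" line appears to come from fdisk looking at the old GPT's protective-MBR entry (start sector 1, which can't be 4K-aligned); preclear rewrites the partition table anyway. To check alignment on the partition that actually ends up on the disk:

    # parted understands GPT and can check alignment directly.
    parted /dev/sdm unit s print          # shows the real start sector
    parted /dev/sdm align-check opt 1     # "1 aligned" means the boundary is fine

    # Or do the arithmetic by hand: on a 4K-physical/512-logical drive, a
    # start sector divisible by 8 sits on a physical sector boundary.
    start=$(cat /sys/block/sdm/sdm1/start)
    if [ $(( start % 8 )) -eq 0 ]; then echo aligned; else echo "not aligned"; fi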
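
Sketch for post 17 - what the per-step elapsed-time suggestion could look like inside a bash script such as preclear (an illustration only, not preclear's actual code):

    #!/bin/bash
    # Per-step timing using bash's builtin SECONDS counter.
    run_step() {
        local name=$1; shift
        local start=$SECONDS
        "$@"                                  # run the actual step
        local t=$(( SECONDS - start ))
        printf '%s: DONE (%02d:%02d:%02d elapsed)\n' "$name" \
               $(( t / 3600 )) $(( t % 3600 / 60 )) $(( t % 60 ))
    }

    # Example: run_step "Pre-read" dd if=/dev/sdX of=/dev/null bs=1M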
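
Sketch for post 20 - one way the sensors.conf could be arranged so the it8728 readings get the labels and the k10temp value is suppressed. Note the chip address in the post's conf ("0028") differs from what sensors reported ("0228"); the wildcard below sidesteps that. This is a guess at what Dynamix picks up, not a verified fix:

    # sensors.conf sketch: label the ISA chip's sensors, hide k10temp's
    chip "it8728-*"
        label temp1 "MB Temp"
        label temp3 "CPU Temp"
        ignore temp2          # the -8.0 C thermistor looks disconnected

    chip "k10temp-*"
        ignore temp1          # stop the single-digit k10temp value showing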
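
Sketch for post 21 - the badblocks run mentioned there, with the usual caveat: the -w write-mode test destroys all data on the disk, so run it only on an unassigned drive.

    # Destructive four-pattern write/read test over the whole disk.
    # -w = write-mode test, -s = show progress, -v = verbose output.
    badblocks -wsv /dev/sdX

On a 4TB drive expect this to take days; -b 4096 matches the block size to a 4K-sector disk and speeds it up somewhat.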