bsim

Everything posted by bsim

  1. Discounts are usually one drive per household per person. Be creative. It's always best to use a faster parity drive, but without the overhead of traditional RAID you probably won't notice much of a difference until you get to SSD. In general, Hitachi has always been the more dependable drive (as both the Google study and the Backblaze study showed). But since Hitachi's drive division was bought out by Western Digital, perhaps they will all just become the same bland brand. One big reason for having a RAID array! Would love for unRAID to pick up P+Q (dual parity) redundancy.
  2. I've had several 2TB and 3TB Hitachis, first running in individual systems and then migrated into my unRAID build (5 or 6 years total now), running nearly continuously the entire time, and I have yet to replace one. I have replaced a 3TB Seagate. All drives run in the mid-20s °C. From all of my research, Hitachi comes out way ahead of Seagate, and Seagate (I have a few) comes way ahead of Western Digital (which I will NEVER buy). http://blog.backblaze.com/2014/01/21/what-hard-drive-should-i-buy/ Thanks for the catch on the Hitachi; ordered a few!
  3. The lowest price Amazon has had: http://camelcamelcamel.com/Seagate-Desktop-3-5-Inch-Internal-ST4000DM000/product/B00B99JU4S Right now Newegg is $150 as well, with $15 off using promo code EMCPFPB42, ends 4/14: http://www.newegg.com/Product/Product.aspx?Item=N82E16822178338
  4. Is there anything special about upgrading from 5.0 to 5.0.4? Can I jump directly to it by copying the bzimage and bzroot files over, or is there anything I have to be careful of?
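     If it really is just the two files, a minimal sketch of the copy (assuming the flash is mounted at /boot as usual, with /path/to/unRAID-5.0.4 standing in for wherever the release was extracted; back up first, since these two files are the whole OS image):

         # back up the running kernel and root image first
         cp /boot/bzimage /boot/bzimage.bak
         cp /boot/bzroot /boot/bzroot.bak
         # copy the 5.0.4 files onto the flash, then reboot into them
         cp /path/to/unRAID-5.0.4/bzimage /path/to/unRAID-5.0.4/bzroot /boot/
         reboot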
  5. After doing some rearranging and adding a few drives (for space), the issue still occurs. Free space is not the problem, given how much is available on both the cache drive and several of the drives in the array. Attached is a current drive listing of my server. I do notice that just before the error occurs when copying files to the server, any reads (playing a movie on a local system) lock up until the error fully times out. I'm still running 5.0's stock 4GB memory limit, the error occurs without any syslog events, and the network card has been swapped for an Intel card; still no go. I'm still leaning towards a Samba/unRAID bug. Is there anything more I can do to diagnose this? Is there a way to increase logging levels in unRAID? (A sketch of raising Samba's own log level follows below.)
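     Since unRAID itself isn't logging anything, one thing to try is turning up Samba's verbosity. A sketch, assuming unRAID 5's /boot/config/smb-extra.conf include (file and init-script paths per my recollection; adjust if your release differs):

         # log level 3 records each SMB request; 10 is full debug
         echo "log level = 3" >> /boot/config/smb-extra.conf
         /etc/rc.d/rc.samba restart
         # watch the log while reproducing the timeout
         tail -f /var/log/samba/log.smbd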
  6. Server motherboard; it has both IDE and SATA ports, IDE is disabled, and attached is the SATA detection info. The drives I tested were attached to the motherboard SATA ports and cfdisk'ed/mke2fs'ed separately from the array. The motherboard has the latest BIOS (10-22-09).
  7. The cache drive (/mnt/cache) is outside the unRAID domain (until the mover kicks in). Just to be sure, I've also tested on a separate drive partitioned and formatted as ext3... still 40MB/s. Anyone know where the bottleneck is? Is it just the drive speed? (SSD is my next step.)
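     To separate raw drive speed from filesystem overhead, a quick sketch (the device name /dev/sdX is a placeholder; run while the drive is otherwise idle):

         # raw sequential read speed, bypassing the filesystem
         hdparm -tT /dev/sdX
         # sequential write through the filesystem: 1GB, flushed to disk before timing ends
         dd if=/dev/zero of=/mnt/cache/testfile bs=1M count=1024 conv=fdatasync
         rm /mnt/cache/testfile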
  8. I am running 64-bit PCI-X 8-port cards on a server board and a 7200RPM SATA cache drive. Right now I get at most 40MB/s doing a parity check or copying locally outside of unRAID (mc). The limit is about the same whether I copy through the PCI-X bus or directly through the motherboard SATA ports. From what I know, 64-bit PCI-X should top out around 1064MB/s (64 bits × 133MHz ÷ 8), the 7200RPM SATA drive should manage around 100MB/s, and gigabit network throughput gets 54MB/s up / 83MB/s down to the motherboard cache drive directly. Is this as expected, am I missing something, or is something else creating a bottleneck?
  9. Figured it out... it was the onboard controller. I removed the drive from the array, plugged it into an external enclosure on my workstation, and heard a slight clunk clunk. The nice part is that the drive was under Hitachi's 5-year warranty. Thanks for the help.
  10. Only disk 9 is red-balled; no other errors. Do the errors show an actual sector problem, or is it more likely an actual onboard hard drive failure?
  11. Oct 21 03:51:05 UnRAID logger: mover finished
      Oct 21 09:44:47 UnRAID kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
      Oct 21 09:44:47 UnRAID kernel: ata8.00: failed command: READ DMA EXT
      Oct 21 09:44:47 UnRAID kernel: ata8.00: cmd 25/00:08:c8:71:32/00:00:29:00:00/e0 tag 0 dma 4096 in
      Oct 21 09:44:47 UnRAID kernel: res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
      Oct 21 09:44:47 UnRAID kernel: ata8.00: status: { DRDY }
      Oct 21 09:44:47 UnRAID kernel: ata8: hard resetting link
      Oct 21 09:44:52 UnRAID kernel: ata8: link is slow to respond, please be patient (ready=0)
      Oct 21 09:44:57 UnRAID kernel: ata8: SRST failed (errno=-16)
      Oct 21 09:44:57 UnRAID kernel: ata8: hard resetting link
      Oct 21 09:45:03 UnRAID kernel: ata8: link is slow to respond, please be patient (ready=0)
      Oct 21 09:45:07 UnRAID kernel: ata8: SRST failed (errno=-16)
      Oct 21 09:45:07 UnRAID kernel: ata8: hard resetting link
      Oct 21 09:45:13 UnRAID kernel: ata8: link is slow to respond, please be patient (ready=0)
      Oct 21 09:45:42 UnRAID kernel: ata8: SRST failed (errno=-16)
      Oct 21 09:45:42 UnRAID kernel: ata8: limiting SATA link speed to 1.5 Gbps
      Oct 21 09:45:42 UnRAID kernel: ata8: hard resetting link
      Oct 21 09:45:47 UnRAID kernel: ata8: SRST failed (errno=-16)
      Oct 21 09:45:47 UnRAID kernel: ata8: reset failed, giving up
      Oct 21 09:45:47 UnRAID kernel: ata8.00: disabled
      Oct 21 09:45:47 UnRAID kernel: ata8: EH complete
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf] Unhandled error code
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf]
      Oct 21 09:45:47 UnRAID kernel: Result: hostbyte=0x04 driverbyte=0x00
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf] CDB:
      Oct 21 09:45:47 UnRAID kernel: cdb[0]=0x28: 28 00 29 32 71 c8 00 00 08 00
      Oct 21 09:45:47 UnRAID kernel: end_request: I/O error, dev sdf, sector 691171784
      Oct 21 09:45:47 UnRAID kernel: md: disk9 read error, sector=691171720
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf] Unhandled error code
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf]
      Oct 21 09:45:47 UnRAID kernel: Result: hostbyte=0x04 driverbyte=0x00
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf] CDB:
      Oct 21 09:45:47 UnRAID kernel: cdb[0]=0x28: 28 00 df 09 08 f8 00 00 08 00
      Oct 21 09:45:47 UnRAID kernel: end_request: I/O error, dev sdf, sector 3741911288
      Oct 21 09:45:47 UnRAID kernel: md: disk9 read error, sector=3741911224
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf] Unhandled error code
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf]
      Oct 21 09:45:47 UnRAID kernel: Result: hostbyte=0x04 driverbyte=0x00
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf] CDB:
      Oct 21 09:45:47 UnRAID kernel: cdb[0]=0x2a: 2a 00 29 32 71 c8 00 00 08 00
      Oct 21 09:45:47 UnRAID kernel: end_request: I/O error, dev sdf, sector 691171784
      Oct 21 09:45:47 UnRAID kernel: end_request: I/O error, dev sdf, sector 691171784
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf] Unhandled error code
      Oct 21 09:45:47 UnRAID kernel: md: disk9 write error, sector=691171720
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf]
      Oct 21 09:45:47 UnRAID kernel: Result: hostbyte=0x04 driverbyte=0x00
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf] CDB:
      Oct 21 09:45:47 UnRAID kernel: md: recovery thread woken up ...
      Oct 21 09:45:47 UnRAID kernel: cdb[0]=0x28: 28 00 29 33 6e e8 00 00 08 00
      Oct 21 09:45:47 UnRAID kernel: end_request: I/O error, dev sdf, sector 691236584
      Oct 21 09:45:47 UnRAID kernel: md: disk9 read error, sector=691236520
      Oct 21 09:45:47 UnRAID kernel: md: recovery thread has nothing to resync
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf] Unhandled error code
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf]
      Oct 21 09:45:47 UnRAID kernel: Result: hostbyte=0x04 driverbyte=0x00
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf] CDB:
      Oct 21 09:45:47 UnRAID kernel: cdb[0]=0x2a: 2a 00 df 09 08 f8 00 00 08 00
      Oct 21 09:45:47 UnRAID kernel: end_request: I/O error, dev sdf, sector 3741911288
      Oct 21 09:45:47 UnRAID kernel: end_request: I/O error, dev sdf, sector 3741911288
      Oct 21 09:45:47 UnRAID kernel: md: disk9 write error, sector=3741911224
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf] Unhandled error code
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf]
      Oct 21 09:45:47 UnRAID kernel: Result: hostbyte=0x04 driverbyte=0x00
      Oct 21 09:45:47 UnRAID kernel: sd 10:0:0:0: [sdf] CDB:
      Oct 21 09:45:47 UnRAID kernel: cdb[0]=0x2a: 2a 00 29 33 6e e8 00 00 08 00
      Oct 21 09:45:47 UnRAID kernel: end_request: I/O error, dev sdf, sector 691236584
      Oct 21 09:45:47 UnRAID kernel: end_request: I/O error, dev sdf, sector 691236584
      Oct 21 09:45:47 UnRAID kernel: md: disk9 write error, sector=691236520
      Oct 22 03:40:01 UnRAID logger: mover started
  12. UPDATE: The tweaks didn't do anything for the problem. Could the drives' limited free space still be an issue even with a cache drive? Attached is an image of my free-space situation, if anyone is interested. UPDATE: I found that moving large amounts of files around on the same share can trigger the issue.
  13. Just saw the last post; I will give it a try. In the meantime, I added a 250GB cache drive connected directly to the motherboard SATA and freed up almost a terabyte across the array. Even with the (empty) cache drive, the system still gives me the error. I will try the tweaks and see if I can still reproduce the issue.
  14. Three alternatives to mirroring that I'm researching, which monitor the cache drive in real time and sync it to a mirror cache drive: unison, lsyncd, and csync2. csync2 seems much higher-level (clustering), and unison seems better suited to client/server. Has anyone had experience with any of these apps? (A minimal lsyncd sketch follows below.)
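     To make the lsyncd idea concrete, a minimal sketch (the config path and the /mnt/cache_mirror target are my assumptions; lsyncd watches the source with inotify and replays changes via rsync):

         -- /etc/lsyncd.conf (assumed path; start with: lsyncd /etc/lsyncd.conf)
         settings {
             logfile    = "/var/log/lsyncd.log",
             statusFile = "/var/log/lsyncd.status",
         }
         sync {
             default.rsync,
             source = "/mnt/cache",          -- drive to watch
             target = "/mnt/cache_mirror",   -- assumed mirror mount point
             delay  = 5,                     -- batch changes for 5 seconds before syncing
         }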
  15. Any guess on how far back? I see the release was going to move to Slackware 14 with the new Samba, too.
  16. Are there any third-party drivers/utilities you've heard of that would keep a real-time mirror of a drive in the background, something that could just watch the cache drive?
  17. I was more suggesting using mdadm to create a software RAID1 array (primarily for cache). I figured the hardware approach wouldn't be a big deal, since the system really doesn't know what the hardware controller has up its sleeve. I know Linux can have nested arrays... is there a way to tell unRAID to just use an md array as a cache drive? (A sketch of the mdadm side is below.) As a side note, I've seen on the forum that emhttp isn't run on full Slackware installs... so there must be a manual way to modify unRAID configs without the web interface, right?
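     The mdadm half of the idea, as a sketch only (device names are placeholders, and whether emhttp will accept the md device as cache is exactly the open question; note unRAID's own driver also claims /dev/md* names, so pick a number it isn't using):

         # mirror two spare partitions into one device intended as the cache
         mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1
         # unRAID 5.x uses ReiserFS for its data/cache drives
         mkreiserfs /dev/md10
         # watch the initial resync
         cat /proc/mdstat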
  18. Can unRAID be forced to mount an md (RAID) device as a data disk, cache drive, or parity drive?
  19. I have tried almost all VM hypervisors except KVM and Hyper-V. Only Xen on an Ubuntu 12.04 dom0 worked. As for PCI access, I am not sure what you asked there. When you do PCI passthrough to a VM it should be a dedicated resource; nothing else can access it, not even dom0. Also, I am not sure, but I have never heard of PCI cards that prevent passthrough. Isn't that what passthrough is all about: using hardware that's not supported by the host but is supported by the VM OS? I took a week researching the motherboard, and a few dozen posts on an ESXi forum, and found that the motherboard was made at a time when virtualization was still being standardized; the board BIOS's virtualization features are a bit half-baked. Xen must use a workaround for hardware dedication. There is a way to force the hardware virtualization workaround in ESXi as well, but there were quite a few drawbacks to doing it, so I ended up giving up. The page that explains the subject with Xen and the board's half-baked AMD-V: http://en.wikipedia.org/wiki/Hypervisor
  20. I'm only thinking out loud about the options that would solve my end problem. I would like to solve the issue within unRAID, but without any other way to tell what the problem actually is, I'm looking at the bigger picture. Is there anything else I can do?
  21. vl1969: The motherboard does support some virtualization features, but direct assignment of my SATA cards to a virtual machine isn't supported. I had intended to run VMware ESXi 5.1. Accessing PCI devices with Xen: is this a shared resource or dedicated access? (I would think it would have to be shared.) Barziya: From what I've read, unRAID can be run on top of a full Slackware 64-bit installation. There are quite a few features I would find useful that aren't in the appliance-like unRAID build. henris: Is there a way to directly determine whether this is the cause? The problem occurred before the free space got this low (I've been doing some direct drive-to-drive transfers). The lowest is 100GB; the largest is around 250GB. The files that can trigger the error can be as small as 100MB.
  22. Is there any logging level that would show that type of error occurring? I understand why the free space is necessary, but the erroring out happens on files as small as 100MB, all 12 of my drives have at least 100GB free, and I have it set to save to the drive with the most free space first. At this point I am putting in a mirrored 256GB cache; I've heard people say a cache drive bypasses a lot of their issues, and the only reason I didn't have a single cache drive before is the inherent danger of losing the most valuable (most recent) files, which is very unacceptable. After getting to that point, with how many system resources I have and the lack of virtualization, I will probably go full Slackware64 14.1 and add a video/sound card with XBMC... can't wait for XBMC's next release to replace Plex with dynamic streaming! For reference, the system motherboard is a SuperMicro H8DME-2.
  23. Are there any dependencies unRAID takes from the Slackware installation that it could have issues with (Samba, emhttp, specific bins...)? Is there any way to raise the unRAID logging level to determine the specific error? Is there any way to escalate my problem?
  24. Not the NIC... I disabled both onboard nForce NICs and installed an Intel PRO GT gigabit card; the system came right up on the network without any problems, but attempting to copy files gives the same "network name is no longer available" error. BTW, I am still under the forced 4GB limit. Is there a way to increase the unRAID log level to show what is actually going wrong? Things I may try: actually removing RAM down to 4GB, or installing a full Slackware 64-bit install. Anything else I can try? Given that I have a Pro license, is there any way to escalate my issue?
  25. Limiting to 4GB does not change anything about the error: append mem=4095M initrd=bzroot (see the boot stanza below). I will try an alternate network card to verify that it is not the card, but I am still leaning towards a Samba/unRAID bug. Also, I've heard of problems that occur when free space runs low (I don't quite think I'm low enough with the size of drives I have). Does anyone know the symptoms and the underlying reasons for those issues?
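     For reference, the stanza that append line lives in, as a sketch (on unRAID 5 the file is syslinux.cfg on the flash, at /boot/syslinux.cfg or /boot/syslinux/syslinux.cfg depending on release; the label name may differ per install):

         label unRAID OS
           kernel bzimage
           append mem=4095M initrd=bzroot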