
dave_m (Members, 99 posts)

Everything posted by dave_m

  1. So if people using sector 63 and a version prior to 4.7 aren't affected, how about people using 5.x with all disks having partitions starting on sector 64?
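Whether a disk falls into the sector-63 or sector-64 camp can be checked by reading its partition table. This is a minimal Python sketch, using a synthetic 512-byte MBR for illustration; on a real system you would read the first 512 bytes of /dev/sdX instead (fdisk -lu reports the same start sectors).

```python
import struct

def partition_start_sectors(mbr: bytes):
    """Return the starting LBA of each of the four primary MBR partition
    entries (0 means the slot is unused)."""
    starts = []
    for i in range(4):
        # The partition table begins at byte 446; each entry is 16 bytes,
        # with the starting LBA stored little-endian at offset 8.
        (start_lba,) = struct.unpack_from("<I", mbr, 446 + i * 16 + 8)
        starts.append(start_lba)
    return starts

# Synthetic MBR with a single partition starting at sector 64
# (made-up data for illustration only).
mbr = bytearray(512)
struct.pack_into("<I", mbr, 446 + 8, 64)
mbr[510:512] = b"\x55\xaa"  # MBR boot signature
print(partition_start_sectors(bytes(mbr)))  # [64, 0, 0, 0]
```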
  2. I skimmed the syslog too quickly the first time, it looks like the error happens on multiple drives, so they probably are fine. Swapping the drives to a different backplane and reseating the cables are worth trying.
  3. I think these lines in the syslog may indicate the problem, but someone with more experience may need to confirm:

     Jan 13 17:43:18 Hercules kernel: sd 0:0:6:0: attempting task abort! scmd(d1267b40)
     Jan 13 17:43:18 Hercules kernel: sd 0:0:6:0: [sdg] CDB: cdb[0]=0x28: 28 00 00 5b 0c 40 00 04 00 00
     Jan 13 17:43:18 Hercules kernel: scsi target0:0:6: handle(0x0010), sas_address(0x5001e6739eda2ff0), phy(16)
     Jan 13 17:43:18 Hercules kernel: scsi target0:0:6: enclosure_logical_id(0x5001e6739eda2fff), slot(16)
     Jan 13 17:43:19 Hercules kernel: sd 0:0:6:0: task abort: SUCCESS scmd(d1267b40)

     Those lines repeat every minute or so during the parity check, but because they occur on multiple drives it's probably not the drives themselves. I'm not sure whether the error points at the cable, power, or controller though. I doubt it's related to upgrading unRAID to RC10 unless other people have started seeing the same error.
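One way to confirm that the aborts are spread across drives rather than hitting a single device is to count "attempting task abort!" events per SCSI address in the syslog. A small Python sketch; the excerpt below reuses the first line from the post and adds made-up lines for a second device, purely for illustration.

```python
import re
from collections import Counter

# Sample syslog excerpt (the 0:0:3:0 lines are invented for illustration;
# feed in the real /var/log/syslog contents instead).
syslog = """\
Jan 13 17:43:18 Hercules kernel: sd 0:0:6:0: attempting task abort! scmd(d1267b40)
Jan 13 17:44:20 Hercules kernel: sd 0:0:3:0: attempting task abort! scmd(d1267c80)
Jan 13 17:45:21 Hercules kernel: sd 0:0:6:0: attempting task abort! scmd(d1267b40)
"""

# Count abort events per SCSI device (host:channel:target:lun).
aborts = Counter(
    m.group(1)
    for m in re.finditer(r"sd (\d+:\d+:\d+:\d+): attempting task abort!", syslog)
)
print(dict(aborts))  # {'0:0:6:0': 2, '0:0:3:0': 1}
```

If the counts cluster on one device, suspect that drive; if they are spread evenly, shared hardware (cable, backplane, power, controller) is the more likely culprit.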
  4. Release 5.0 and use a 64-bit kernel for the next release (5.1, 6.0...) I think there will be plenty of people happy to upgrade to an "official release" from 4.x, as there have been bug fixes and new hardware supported like LSI cards and 3+ TB drives. If the 64 bit kernel solves the write problem and is the next priority, then hopefully the X9SCM-F users will be satisfied as well.
  5. Just another confirmation for anyone considering this, I plugged it into a unRaid 4.7 system and it was recognized immediately and I am now preclearing the 2 drives attached to it.
  6. I've been running 5.0 beta12a for at least a year. It seems like the LSI bug has been fixed, so is there any reason I should or should not try upgrading to a more recent version? (I boot off an SD card in a USB adapter, so upgrading / downgrading is pretty easy.) All my server does is share files using Samba, so no plugins or anything to worry about.
  7. Good to know, I was planning on using the script, but didn't know it might need to be modified.
  8. Depends on what the drive was primarily used for. I have RMA'd a parity drive, and I wouldn't hesitate to RMA a media drive from unRAID or a data drive from my MythTV server. A backup drive or OS drive is a completely different story unless I could wipe it first.
  9. That sounds like the problem spinning up the drives all the LSI owners had with beta 13 & 14, which is why beta12a is recommended. Which version did you have problems with?
  10. I've been using this motherboard for a while, and while I can't remember the exact steps, I didn't have to do anything unusual to boot from the USB thumb drive. Normally you want to set the boot options in the BIOS after installing all the drives.
  11. I'm pretty sure the BR10i doesn't support anything larger than 2TB. I think the BR10i 3TB support refers to allowing multiple smaller drives to appear as one 3TB drive, which isn't really applicable to most unraid usage. That said, I have the BR10i and it works great with my 2TB drives and beta12a.
  12. Yes, I'm using the case. It has 7 internal 3.5" bays, one external 3.5" bay and three 5.25" bays, as well as space for 2 more internal 3.5" drives on the crossbar. If you use a 5-in-3 drive cage then it'll hold 15 drives. I'm using a cheaper 4-in-3 drive cage. The airflow is adequate, but there are a couple of places that will be better for green drives.
  13. They don't need to be connected to run the server. Change the BIOS settings, remove them, and then test whether things still work, as that test is closer to the actual usage.
  14. If the USB keyboard / monitor is the issue, why have them connected at all?
  15. That's a bummer for sure as green drives were perfect for most of what unraid does. The PSU impact is real, as even smaller servers could need a bigger one if green drives were swapped for 7200RPM drives. They will need more/better cooling, which means either louder or more expensive fans. The green drives were typically cheaper than 7200RPM drives as well, so I'm not seeing any upside here for unraid. For a home user with a drive or two in a desktop, sure 7200RPM makes sense, although I'd personally recommend a SSD and a green storage drive over 2 7200RPM drives.
  16. The newest firmware (3.5.5 as of Jan 2012) is a must as older versions could result in some serious problems requiring a complete OS reinstall. This drive is my boot drive on a MythTV system right now, no problems for me yet.
  17. Disk sdc was reporting errors at 14:15, might be worth looking at the SMART status for it.
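Checking SMART status usually means running `smartctl -A /dev/sdc` and looking at the reallocated and pending sector counts. As a sketch, here is one way to pull those attributes out of the report in Python; the sample text below is made up for illustration, not real smartctl output from this system.

```python
# Attributes most often tied to failing sectors.
watch = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

# Hypothetical excerpt of `smartctl -A` output (columns: ID, name, flags,
# value, worst, threshold, type, updated, when_failed, raw value).
sample = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       3
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Always       -       0
"""

raw = {}
for line in sample.splitlines():
    fields = line.split()
    if len(fields) >= 10 and fields[1] in watch:
        raw[fields[1]] = fields[-1]  # last column is the raw count

print(raw)
```

Non-zero raw counts for any of these attributes, especially ones that grow between checks, are a good reason to replace the drive.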
  18. Are all your drives almost full? There have been reports of extremely slow access when disks don't have much free space left.
  19. I'm not quite sure what you mean here. Are you saying you are using windows explorer on your local PC to copy data from one unraid drive to another? I wouldn't expect very good performance for that scenario, even on gigabit ethernet. The total time to preclear the drives sounds about right, even though the reported speed on the final step seems slow.
  20. The motherboard doesn't have onboard video, so now you're talking about adding a video card as well as a NIC. The ECS A885GM-A2 (V1.1) AM3 AMD 880G SATA 6Gb/s ATX AMD Motherboard could be an option if an ATX motherboard will work.
  21. Running the cache_dirs script is what made mine quit spinning up all drives when opening the initial web page. If it's not running the web page takes 20 seconds to load, and all the drives are spun up. If it is running, it opens immediately without spinning everything up. I am using a LSI (IBM BR10i) controller if that matters.
  22. The LSI based cards in this thread might be an option. They haven't been used as long with unRaid, but I'm using one with 5.0beta12a and it's been working well.
  23. Interesting, I thought I was seeing this same thing for a while using beta12a, but my problem seems to have gone away. It would take longer than usual to open the main unRaid web page the first time after authenticating, and then all the drives would be spun up. Unfortunately I don't recall doing anything to resolve the issue and I can't reproduce it now.
  24. Need a better idea of what hardware you have, and the entire syslog if possible. The snippet you posted shows problems that could be controller, cable or hard drive.