dangil


  1. Hi everyone. I am running stock 5.0.5 without any plugins, on 9 HDs (a mix of 2 TB, 1.5 TB, and 1 TB). During a parity check, I found it odd that the reported number of reads varies wildly between HDs of the same size: one 1 TB HD had 10x more reads than the other (100 GB free on one, 70 GB free on the other). I imagine larger HDs must log more reads during a parity check, but same-sized HDs should show the same number of reads, right? All HDs are Seagate, though their models vary, and both 1 TB drives are attached to the same SAS 1068-E controller. Can anyone explain this behavior?
  2. Hello everyone. My X8SIL-V is running BIOS version 1.0c. There is a newer 1.2a BIOS, but I can't find a changelog or a list of fixed issues. Does anyone know what changed between those versions? Is it worth the upgrade? I am having no issues whatsoever with my server, but I like to keep things updated.
  3. Tom, perhaps you could build the other modules for this driver from the kernel source itself, like mptctl.ko, so we can use lsiutil to monitor the board. Thanks.
  4. I see. Thanks for the answer, Tom. I will try to motivate myself to create a package for this driver.
  5. LSI provides an updated, DKMS-based mptsas driver source (http://www.lsi.com/products/storagecomponents/Pages/LSISAS1068E.aspx). The current unRAID version is 3.04.20; the latest LSI version in the 3.x series is 3.28.00, and the latest in the 4.x series is 4.28.00. I don't know how hard these are to build or what benefits an updated driver would bring, but I think an LSI-provided driver is better than a generic one. Also, how hard would it be to build an addon with this kernel module that gets installed at boot time, without having to involve a new unRAID release?
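
Loading an extra module at boot without a new unRAID release could look roughly like the fragment below. This is a sketch only: it assumes the module (e.g. the mptctl.ko mentioned earlier) has already been built against the running unRAID kernel, and the /boot/extra location is an assumption, though /boot/config/go is unRAID's standard boot-time hook on the flash drive.

```shell
# Fragment, untested: stash the prebuilt module on the flash drive and load
# it from the go script, which unRAID runs at every boot.
mkdir -p /boot/extra
cp mptctl.ko /boot/extra/
echo 'insmod /boot/extra/mptctl.ko' >> /boot/config/go
```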
  6. So far so good: no crashes with rc4. Clean install, no plugins or mods, as always. If it happens again I will report.
  7. Thanks Tom! Updating to rc4 to try this fix. Will report back.
  8. An update: since I started unmounting disk6 (connected to the PCI-e SATA controller) manually, everything is running smoothly and there are no more crashes. I can press Stop, and unRAID will unmount the disks connected to the onboard SATA and stop the array.
  9. Currently using 5.0-RC3. I am more interested in understanding when the array will or won't autostart, based on the setting and on different events that might happen with the array in parallel. I've seen occasions when the array started automatically even though the setting was NO, so I was wondering if there are any known bugs. If not, I will do more testing.
  10. What is the intended behavior of this setting, besides starting up the array automatically upon boot? Is there any circumstance where this setting will revert back to NO? Is there any circumstance where the array will start even though this setting is set to NO or, vice versa, where the array won't start even though it is set to YES? Is there a bug that displays the wrong info for this setting?
  11. Since beta6a, and up to rc3, I have had an intermittent issue while stopping the array: umount crashes the server. I have a Supermicro X8SIL-V motherboard with 6 onboard SATA ports. If I only use these ports, I don't see the issue; with more than one SATA controller, the issue appears. I did several cycles of mount/umount of disk6 (the disk attached to the PCI-e controller), in maintenance mode as well as in normal mode (with SMB Export set to off), and none failed. I think this bug is triggered because the umount of disk5, the last disk on the onboard SATA controller, is not completely finished when the umount of disk6, attached to the PCI-e controller, is started next in the sequence. What led me to this conclusion is that in my syslog, 2 disks remain busy after umount crashes the kernel. The conclusion is that the unmount of the second-to-last disk and the unmount of the last disk conflict with each other, perhaps due to a race condition between them. I think this issue could be solved by adding a small delay between umounts. Attached is the syslog with the kernel BUG info: syslog.zip
  12. Tom, could you implement a slight delay (3 seconds, for example) between each unmount when the Stop button is pressed in the webGui?
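
The staggered stop sequence suggested here could be sketched roughly as follows. This is a sketch only, not unRAID's actual stop code; the disk list, mount points, and dry-run switch are assumptions for illustration. DRY_RUN defaults to 1 so the sequence can be inspected safely.

```shell
#!/bin/sh
# Sketch: unmount the array's data disks one by one with a pause between
# each, instead of back to back, to sidestep the suspected umount race.

staggered_unmount() {
    # $1 = seconds to pause after each unmount; remaining args = mount points
    delay=$1
    shift
    for d in "$@"; do
        if [ "${DRY_RUN:-1}" = "1" ]; then
            # Dry run (default): only print what would happen.
            echo "umount $d; sleep $delay"
        else
            umount "$d"
            sleep "$delay"
        fi
    done
}

# Example: 3-second gaps, as proposed above (mount points are placeholders).
staggered_unmount 3 /mnt/disk1 /mnt/disk2 /mnt/disk3 /mnt/disk4 /mnt/disk5 /mnt/disk6
```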
  13. I did several cycles of mount/umount of disk6 (the disk attached to the PCI-e controller), and none failed. I have a hunch: could this bug be triggered because the umount of disk5, on the onboard SATA controller, is not completely finished when the umount of disk6, attached to the PCI-e controller, is started next in the sequence? What if a few seconds were added between all the umount commands? Could someone create a test case with 2 disks, one attached to an onboard SATA controller and the other attached to an offboard SATA controller, and a script that mounts and unmounts them in sequence? What led me to this hunch is that in my syslog, 2 disks remain busy after umount crashes the kernel. The conclusion is that the unmount of the second-to-last disk and the unmount of the last disk conflict with each other, perhaps due to a race condition between them. I don't have the capability to debug this low-level kernel stuff, but perhaps someone else does.
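
The test case asked for here might look something like the sketch below. The device paths, mount points, and cycle count are placeholders, and by default the script only prints its command plan; set RUN_CMDS=1 to execute it for real against two spare disks on different controllers.

```shell
#!/bin/sh
# Sketch of a mount/umount stress test for two disks on different
# controllers: mount both, then unmount them back to back with no delay,
# repeatedly, to try to reproduce the suspected race.

# Print commands by default; execute them only when RUN_CMDS=1.
run() {
    if [ "${RUN_CMDS:-0}" = "1" ]; then "$@"; else echo "$@"; fi
}

cycle_mounts() {
    # $1 = cycles, $2 = onboard device, $3 = offboard device,
    # $4/$5 = their mount points
    cycles=$1 dev_a=$2 dev_b=$3 mnt_a=$4 mnt_b=$5
    i=0
    while [ "$i" -lt "$cycles" ]; do
        run mount "$dev_a" "$mnt_a"
        run mount "$dev_b" "$mnt_b"
        run umount "$mnt_a"   # second-to-last disk...
        run umount "$mnt_b"   # ...immediately followed by the last disk
        i=$((i + 1))
    done
}

# Placeholder devices/mount points; adjust before running for real.
cycle_mounts 2 /dev/sdb1 /dev/sdc1 /mnt/test5 /mnt/test6
```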
  14. Can I unmount a disk while the array is started in normal mode (not in maintenance mode)? I assume I must stop Samba first.
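
If stopping Samba first does turn out to be required, the manual unmount might look like this run-book fragment. It is unverified: the rc.samba path is an assumption about the Slackware-based unRAID 5.x layout, and disk6 is just the example disk from the posts above.

```shell
# Fragment, untested: quiesce Samba so no open files pin the disk,
# unmount it, then bring Samba back up.
/etc/rc.d/rc.samba stop     # assumed init-script path on unRAID 5.x
umount /mnt/disk6           # the PCI-e-attached disk from the posts above
/etc/rc.d/rc.samba start
```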