Brucey7's Achievements


  1. Thanks, I do have all my old files.
  2. Thank you to all those who helped, especially trurl. The new disk is fitted and has rebuilt successfully. I'm not sure whether the old disk is OK or not; at some point I'll try to preclear it and see what happens.
  3. I have an update. I reseated all the drives and rebooted the server. It saw the failed disk. I ran a parity check and it ran OK for about an hour before I went to bed; this morning the disk had been dropped again overnight with 2048 disk errors, and the parity check hasn't yet finished. So I will shortly be in a position where the disk is being emulated and I can add the new disk when it arrives next week. I have attached the diagnostics. I'd be grateful for confirmation that the disk is shot.
  4. Yes, I still have it; after a "new config" retaining all disks, the server wouldn't see it.
  5. Yes, the array has not been restarted. My plan was to assign the new disk only, format it, start the array with all the disks (new disk included) after clicking "Parity is OK", shut down, reboot and rebuild parity. I have a few servers; this particular server has issues: every few months I get UDMA errors, sometimes resulting in a disk dropping off the array. A new config retaining all disks corrects it (it didn't this time), and I then do a correcting parity check. I've replaced disk backplanes, cables and disk controllers — everything except the motherboard, which is too big/expensive a job.
  7. I did not allow New Config to rebuild parity because I know it will initialise the new drive (which is on order). Parity is still OK.
  8. I have a failed disk. I have done a new config and kept all disks in it; it now shows one disk missing but doesn't show details of the disk. I want to replace it with a larger disk and rebuild from parity. What do I do?
  9. A potential solution to this might be the following action on hitting the Spin Down button: read vm.dirty_expire_centisecs; change vm.dirty_expire_centisecs from the read value to 1 second (potentially 0 seconds); spin down the disks; wait 1 or 2 seconds; if any disks spun up, spin them down again; restore vm.dirty_expire_centisecs to the original read value. Currently, after a large write you need to wait two 30-second intervals, i.e. a minute, before you can spin down the disks, for the write cache to be flushed to disk.
  10. Actually, Tips & Tweaks doesn't help. The disks still spin up again and make more writes. What's really needed is for the write cache to be flushed to disk before spinning down the disks.
  11. Thanks dlandon, I have installed Tips and Tweaks; the options seem to control the size of the cache, not the speed at which it is flushed to disk. I've set the size percentages smaller and will see where that goes. I would still prefer to see the Spin Down disks button flush the cache before spinning down the disks.
  12. Change it from what it does now to: sync the file system, then spin down. After a large write, I have to wait for about a minute before I can spin down the disks, as spurious writes seem to be made; if I spin them down straight after a large copy, they spin back up again after a few moments.
  13. Further, this doesn't work properly on either server. The bug seems to be as follows, with a polling interval set at 30 seconds: start a large number of files copying to the server; spin up the array; each file is copied for up to 30 seconds in read/modify/write before switching to reconstruct write; then the next file in the list begins in read/modify/write mode for up to 30 seconds again.
  14. How do I determine what HBAs I have?
  15. I'm away on holiday for 2 weeks and have 3 Unraid servers; auto turbo write does work on one, and another is a backup with no drives. I'm not sure what the host controllers are.
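The spin-down sequence proposed in post 9 can be sketched as a shell script. This is only a sketch under assumptions: the vm.dirty_expire_centisecs tunable is the standard Linux one (a value in centiseconds, so 100 = 1 second), but `mdcmd spindown` as the spin-down command is an assumption about Unraid's tooling, and the privileged steps need root, so they are guarded:

```shell
#!/bin/sh
# Sketch of the proposed spin-down sequence from post 9.
# vm.dirty_expire_centisecs is a standard Linux sysctl (units: centiseconds);
# "mdcmd spindown" is assumed to be Unraid's spin-down command (hypothetical
# here) and, like the sysctl write, requires root.

old=$(cat /proc/sys/vm/dirty_expire_centisecs)       # save the current value
echo "saved vm.dirty_expire_centisecs=$old"

if [ "$(id -u)" -eq 0 ]; then
    echo 100 > /proc/sys/vm/dirty_expire_centisecs   # expire dirty pages after 1s
    sync                                             # flush the write cache now
    /usr/local/sbin/mdcmd spindown 0                 # spin down (Unraid-specific, assumed)
    sleep 2
    /usr/local/sbin/mdcmd spindown 0                 # again, in case writes woke a disk
    echo "$old" > /proc/sys/vm/dirty_expire_centisecs  # restore the original value
else
    echo "not root: skipping the sysctl write and spin-down steps"
fi
```

The point of shortening the expiry window before the first spin-down is that dirty pages are flushed almost immediately instead of lingering for the default 30 seconds, so the follow-up write that would otherwise wake the disks never happens.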
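On the HBA question in posts 14-15: on a Linux host the disk controllers appear as PCI class 0x01 ("mass storage controller") devices, so they can be listed either with `lspci` where pciutils is installed, or by reading sysfs directly. A minimal sketch of the sysfs route:

```shell
#!/bin/sh
# List PCI storage controllers (HBAs, SATA/SAS/RAID cards) by reading sysfs.
# PCI class code 0x01xxxx means "mass storage controller" (0x0106 = AHCI/SATA,
# 0x0107 = SAS, 0x0104 = RAID). Where pciutils is installed, "lspci" gives the
# same information with human-readable names.
for dev in /sys/bus/pci/devices/*; do
    [ -e "$dev/class" ] || continue                  # glob may match nothing
    class=$(cat "$dev/class")
    case "$class" in
        0x01*)
            echo "$(basename "$dev") class=$class" \
                 "vendor=$(cat "$dev/vendor") device=$(cat "$dev/device")"
            ;;
    esac
done
```

The vendor/device IDs it prints can be looked up to identify the exact controller model, which is what determines whether auto turbo write is supported.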