Everything posted by Brucey7

  1. A potential solution to this might be the following action on hitting the spin-down button:
     • Read vm.dirty_expire_centisecs
     • Change vm.dirty_expire_centisecs from the read value to 1 second (potentially 0 seconds)
     • Spin down the disks
     • Wait 1 or 2 seconds
     • (If any disks spun up) spin down the disks again
     • Restore vm.dirty_expire_centisecs to the original read value
     Currently, after a large write you need to wait two 30-second intervals, i.e. about a minute, for the write cache to be flushed to disk before you can spin down the disks.
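     A minimal shell sketch of those steps, assuming the standard Linux sysctls and that Unraid's mdcmd accepts a per-slot spindown command (the slot count and the mdcmd usage here are assumptions, not a tested implementation):

        #!/bin/bash
        # Sketch: flush the write cache quickly, then spin the array down.
        orig=$(sysctl -n vm.dirty_expire_centisecs)    # remember the current expiry value
        sysctl -w vm.dirty_expire_centisecs=100        # expire dirty pages after 1 second
        sync                                           # push anything already dirty out to disk
        for slot in $(seq 1 22); do                    # number of slots is an assumption
            /usr/local/sbin/mdcmd spindown "$slot"
        done
        sleep 2                                        # give any late writes a moment to land
        for slot in $(seq 1 22); do                    # second spin-down pass, as described above
            /usr/local/sbin/mdcmd spindown "$slot"
        done
        sysctl -w vm.dirty_expire_centisecs="$orig"    # restore the original setting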
  2. Actually, Tips & Tweaks doesn't help. The disks still spin up again and make more writes. What's really needed is for the write cache to be flushed to disk before spinning down the disks.
  3. Thanks dlandon, I have installed Tips and Tweaks; the options seem to control the size of the cache, not the speed at which it is flushed to disk. I've set the size percentages smaller and will see where that goes. I would still prefer the Spin Down disks button to flush the cache before spinning down the disks.
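     For reference, a rough sketch of that distinction, assuming the plugin is exposing the standard Linux dirty-page sysctls:

        # Size of the write cache: how much dirty data may accumulate (percent of RAM)
        sysctl vm.dirty_background_ratio     # background flushing starts above this
        sysctl vm.dirty_ratio                # writers are throttled above this

        # Speed of flushing: how old dirty data may get before it must be written out
        sysctl vm.dirty_expire_centisecs     # age (1/100 s) at which dirty pages are flushed
        sysctl vm.dirty_writeback_centisecs  # how often the flusher threads wake up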
  4. From what it does now, change it to sync the file system, then spin down. After a large write, I have to wait for about a minute before I can spin down the disks, as spurious writes seem to be made; if I spin them down straight after a large copy, they spin back up again after a few moments.
  5. Further, this doesn't work properly on either server. The bug seems to be as follows, with a polling interval set at 30 seconds:
     • Start a large number of files copying to the server
     • Spin up the array
     • Each file is copied in read/modify/write for up to 30 seconds before switching to reconstruct write
     • The next file in the list then begins in read/modify/write mode for up to 30 seconds again
  6. How do I determine what HBAs I have?
  7. I'm away on holiday for 2 weeks and have 3 unRAID servers; auto turbo write does work on one, and another is a backup with no drives. I'm not sure what the host controllers are.
  8. I'd thought of that; I would love to store the metadata on an SSD and make the catalogue a lot quicker, but there is no such option in Ember.
  9. One of my servers is dedicated to media (movies), all of them catalogued with Ember Media Manager, and there is only one share on the server, with 20 data drives allocated to that share. If I use spin-up groups, all the drives stay spun up whilst I watch a 2-hour movie. If I don't use spin-up groups, one drive at a time spins up as Ember Media Manager retrieves the NFO file and JPG files to display in the catalogue. It can take 10 minutes or more as I scroll from movie to movie for the first 20 movies or so in the Ember catalogue, with long delays when the files are on spun-down disks. A great feature would be for all drives in the share to be spun up when (say) the NFO or JPG file is accessed, and then, whilst I'm actually watching the movie (typically an MKV file), the other drives would spin down on the preset spin-down delay. Currently I work round this by using spin-up groups and copying the movie I want to watch to my HTPC, but it's a pain with some 4K movies being 70GB or larger.
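     A crude workaround sketch in the meantime, assuming the share is called Movies (a hypothetical name) and that a small metadata read is enough to wake each disk:

        # Wake every array disk that holds part of the Movies share by reading a
        # little directory metadata from each disk's slice of the share.
        for d in /mnt/disk*/Movies; do
            ls -lR "$d" > /dev/null 2>&1 &   # recursive listing forces the disk to spin up
        done
        wait                                  # all disks wake in parallel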
  10. They are all Seagate Archive drives, ST8000AS0002 (20 of them), and the 2 parity drives are Hitachi HGST_HDN728080ALE604.
  11. I have the same error; it doesn't realise the disks are spun up.
  12. A suggestion for an enhancement: switch to reconstruct-write mode if writing has been continuous for X minutes.
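     A minimal sketch of the idea, assuming the write mode can be toggled with mdcmd set md_write_method (1 for reconstruct write, 0 for read/modify/write) and that the array devices appear as md* in /proc/diskstats; the threshold and polling interval below are assumptions:

        #!/bin/bash
        # Sketch: watch array write activity and enable reconstruct write once
        # writing has been continuous for WINDOW seconds.
        WINDOW=300          # "X minutes" of continuous writing, assumed to be 5
        busy_since=0
        last=0
        while sleep 30; do  # 30-second polling interval, as in the thread
            writes=$(awk '$3 ~ /^md/ {w += $10} END {print w+0}' /proc/diskstats)
            if [ "$writes" -gt "$last" ]; then
                [ "$busy_since" -eq 0 ] && busy_since=$(date +%s)
                if [ $(( $(date +%s) - busy_since )) -ge "$WINDOW" ]; then
                    /usr/local/sbin/mdcmd set md_write_method 1   # reconstruct write
                fi
            else
                busy_since=0
                /usr/local/sbin/mdcmd set md_write_method 0       # read/modify/write
            fi
            last=$writes
        done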
  13. I regularly do a single write of 100-120 GB of data at a time with no speed issues at all; with "reconstruct write" on, I nearly always max out the Gigabit connection, though it occasionally drops from 113MB/s to 90MB/s. I have 11 of these shingled drives in the server, 2 of them as parity.
  14. Including 2 parity drives, I have 9 of these in one of my servers and the rest are 6TB drives, which I'm gradually replacing with these shingled drives; they are a solid performer. There's just one thing you shouldn't do with them, and that's two simultaneous writes, e.g. a parity rebuild AND writing new data; other than that, I love them. I should add, some of my media files are over 100GB each, with no issues.
  15. I've just started toggling Read-Only/Read-Write myself; it's just a bit tedious. I wasn't in the least bit worried about ransomware (until someone mentioned it, ignorance was bliss).
  16. Another relatively easy way to provide resistance is to have another field on the "Global Share Settings" page (or per share) which says "Lock File System Read-Only Y/N". Then, if locked, have another option appear with a drop-down menu: "Temporarily Unlock File System for... 1/2/4/8/16/32/64/128 minutes". If you use mover, it could unlock it temporarily.
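     A rough sketch of what the timed unlock could do underneath, assuming the array disks can simply be remounted read-only (whether Unraid's user shares and mover behave well with this is an assumption, and the remount will fail if files are open for writing):

        # Lock every array disk read-only.
        for d in /mnt/disk[0-9]*; do
            mount -o remount,ro "$d"
        done

        # Temporarily unlock for a chosen number of minutes, then re-lock.
        unlock_minutes=8                     # one of the 1/2/4/8/... choices
        for d in /mnt/disk[0-9]*; do
            mount -o remount,rw "$d"
        done
        sleep $(( unlock_minutes * 60 ))
        for d in /mnt/disk[0-9]*; do
            mount -o remount,ro "$d"
        done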
  17. Thanks guys, that is what I was looking for
  18. The question is... When I insert a disk that unRAID thinks has failed, is there some way I can tell it to forget that it was used before, so that it offers it as a new disk instead of (incorrectly) saying it is faulty, and thus rebuilds the data onto it?
  19. I have 2 parity disks, so I have already rebuilt one disk with a spare and the other is nearly finished with another spare - but there is nothing wrong with the 2 disks that the server won't let me reuse until this process is complete (I have no data loss). I try to avoid doing New Config wherever possible.
  20. For example, to make it think the disk is new and attempt to rebuild it? I had some cabling issues (now fixed) that resulted in 2 drives being marked as "Faulty" and disabled. unRAID will no longer allow me to use them in that server, but they are fine and are just completing a preclear in my second server.
  21. Having one unRAID server act as a master and slave other servers' user shares might be useful. So when one server is at capacity, you can slave a second/third etc., with concatenation of user shares performed by the master.
  22. Check when you fit 5x3s: there is often a right way and a wrong way. Whilst they will work upside down, on some of them the LEDs are illuminated at an angle and are only properly visible on floor-standing towers when mounted one way round, so that the LEDs are angled upwards.
  23. I have a few Thecus N7700 NAS boxes that run ZFS flawlessly, and have done for 5 years. They only have 1GB of RAM in them; admittedly the array size is only 14TB.
  24. The command you need is rsync --progress -avh /mnt/disk15/ /mnt/disk5 (note the trailing slash on the source). This will not give you a top-level disk15 directory with your data under it; the contents of disk15 go straight into /mnt/disk5. Unfortunately, the rsync documentation on some sites gets this wrong; I found it out by trial and error. I should add, this will leave your original disk untouched, i.e. it does not remove the files.
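     A short sketch of the trailing-slash behaviour (disk numbers as in the post; the flags are standard rsync):

        # Trailing slash on the source: copy the CONTENTS of disk15, so disk15's
        # top-level directories land directly under /mnt/disk5.
        rsync --progress -avh /mnt/disk15/ /mnt/disk5

        # No trailing slash: copy the directory itself, giving /mnt/disk5/disk15/...
        rsync --progress -avh /mnt/disk15 /mnt/disk5

        # Neither form removes anything from the source; rsync only deletes source
        # files if you explicitly add --remove-source-files.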