Brucey7


Posts posted by Brucey7

  1. Thanks, I did plug it into a PC. It appeared to be OK, but it dropped off the PC as soon as I tried to copy the config folder.

     

    I have a new USB drive and copied my key onto a fresh install, but when I try to replace the key and confirm OK to blacklist the old USB, I get "request failed with status code 403, error: key file not valid".

  2. I do have a backup of the flash drive. It's 3 years old, but I have a disk map in case any disks were replaced in that time.

     

    I put the flash drive in my PC and it scanned and corrected the errors. I can't see a boot directory; should there be one?

  3. After upgrading from 6.12.8 to 6.12.10, but before rebooting, I got a message saying the flash was corrupted or offline.

     

    A directory listing of the flash shows no files, although the size is reported as 30.0GB unused and 1.37GB used.

     

    I can't provide diagnostics, as it fails to save them.

     

    What do I do?

  4. The split level is set so that no directories are on more than one disk. 
     

    The directories already exist on these disks, so Unraid should just add the new files into the existing directories.

     

    It works perfectly on 15 disks but not on 2 of them. Unraid thinks those disks are full, which they are not.

     

    The use case is new episodes of an existing TV series.

  5. I am not at home; I will check in a couple of days, but I think the settings are the same on all 17 disks. Only 2 behave differently.

     

    FWIW, copying to the share where these 2 disks were involved used to hit a "not enough space" error, but if you hit "try again" a few times, it would eventually complete successfully.

  6. Multiple XFS drives in the array.

    All drives have circa 1TB of free space.

    Only one share on the server, containing all drives.

     

    When copying files to the share (into existing folders on two particular drives), the copy cannot complete because it reports insufficient free space; but a copy to the correct folder on a separately mapped drive completes successfully, and the other drives have no problem.

     

    A parity check and a correcting XFS check both completed, to no avail; the problem persists.

  7. Thank you to all those who helped, especially trurl.

     

    The new disk is fitted and has rebuilt successfully.

     

    I'm not sure whether the old disk is OK or not; at some point I'll try preclearing it and see what happens.

  8. I have an update.

     

    I reseated all the drives and rebooted the server. 

     

    It saw the failed disk. I ran a parity check and it ran OK for about an hour before I went to bed; this morning the disk had been dropped again sometime overnight with 2048 disk errors. The parity check hasn't finished yet.

     

    So I will shortly be in a position where the disk is being emulated and I can add the new disk when it arrives next week.

     

    I have attached the diagnostics. I'd be grateful for confirmation that the disk is shot.

    tower2-diagnostics-20210821-0646.zip

  9. Yes, the array has not been restarted.

     

    My plan was to assign the new disk only, format it, start the array with all the disks (new disk included) after clicking "Parity is OK", shut down, reboot, and rebuild parity.

     

    I have a few servers; this particular one has issues. Every few months I get UDMA errors, sometimes resulting in a disk dropping off the array. A new config retaining all disks corrects it (it didn't this time), after which I do a correcting parity check. I've replaced disk backplanes, cables, and disk controllers, everything except the motherboard, which is too big/expensive a job.

  10. I have a failed disk. I have done a new config and kept all disks in it; it now shows one disk missing but doesn't show details of the disk.

     

    I want to replace it with a larger disk and rebuild from parity. What do I do?

  11. A potential solution to this might be the following sequence of actions on hitting the Spin Down button...

     

    Read vm.dirty_expire_centisecs

    Change vm.dirty_expire_centisecs from the read value to 1 second, i.e. 100 centisecs (potentially 0)

    Spin down the disks

    Wait 1 or 2 seconds

    If any disks spun back up, spin them down again

    Restore vm.dirty_expire_centisecs to the original read value

     

    Currently, after a large write, you need to wait two 30-second intervals, i.e. a minute, for the write cache to be flushed to disk before you can spin down the disks.
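
    For illustration, here is a minimal Python sketch of the sequence above. It assumes the sysctl is adjusted through /proc/sys and that hdparm issues the standby request; Unraid's built-in spin-down mechanism may well work differently, and the device names are examples only.

    ```python
    import subprocess
    import time
    from pathlib import Path

    # Hypothetical sketch of the proposed Spin Down sequence (needs root).
    DIRTY_EXPIRE = Path("/proc/sys/vm/dirty_expire_centisecs")

    def spin_down(disks):
        """Request standby (spin down) for each disk via hdparm -y."""
        for dev in disks:
            subprocess.run(["hdparm", "-y", dev], check=False)

    def spin_down_with_flush(disks):
        original = DIRTY_EXPIRE.read_text().strip()  # read the current value
        try:
            DIRTY_EXPIRE.write_text("100")  # expire dirty pages after 1 second
            spin_down(disks)                # first spin-down request
            time.sleep(2)                   # let the flusher write out dirty pages
            spin_down(disks)                # spin down anything that woke back up
        finally:
            DIRTY_EXPIRE.write_text(original)  # restore the original value

    spin_down_with_flush(["/dev/sdb", "/dev/sdc"])  # example device names
    ```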

     

  12. Thanks dlandon, I have installed Tips and Tweaks; the options seem to control the size of the cache, not the speed at which it is flushed to disk. I've set the size percentages smaller and will see where that goes.

     

    I would still prefer to see the Spin Down button flush the cache before spinning down the disks.

  13. From what it does now, to syncing the file system first and then spinning down.

     

    After a large write, I have to wait about a minute before I can spin down the disks, as spurious writes seem to be made; if I spin a disk down straight after a large copy, it spins back up again after a few moments.
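
    A minimal sketch of that desired behaviour, again with the assumption that hdparm is what issues the standby request (Unraid may use its own interface instead):

    ```python
    import os
    import subprocess

    # Hypothetical sketch: flush the page cache first, then spin down.
    os.sync()  # block until file system buffers have been flushed to disk
    subprocess.run(["hdparm", "-y", "/dev/sdb"], check=False)  # example device
    ```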