itimpi

Moderators
  • Posts: 15793
  • Days Won: 42
Community Answers

  1. itimpi's post in disk spun up when downloading all settings look fine was marked as the answer   
    Why do you think everything should be on cache?   With the settings you show new files should go to the cache until mover runs, but older files will have been moved to the array.
  2. itimpi's post in advice on plan to reformat drive from RFS to XFS was marked as the answer   
    Really up to you.   If you are familiar with rsync I would tend to use it as it gives you more control.
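    As an illustration only, here is a minimal rsync sketch for copying a drive's contents before reformatting; the disk numbers are hypothetical, so substitute your actual source (the RFS disk) and target:
    rsync -avX --progress /mnt/disk1/ /mnt/disk2/
    The -a option preserves permissions and timestamps, and the trailing slashes mean the contents of disk1 are copied into disk2 rather than into a disk1 subfolder.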
  3. itimpi's post in Array Drive Replacement w/ Parity was marked as the answer   
    No, but it is the drive you rebuild, not parity.   Replacing the drive requires rewriting every sector on that drive with its new contents, and during that process Unraid will be reading from all the other data drives and the parity drive to reconstruct its contents.   There is no need to first copy the data elsewhere as the rebuild process puts back the contents.   It’s a good idea, though, to keep the removed disk intact until the rebuild finishes just in case anything goes wrong during the rebuild.
     
  4. itimpi's post in move: create_parent: no space left on device | files not being moved off cache drive was marked as the answer   
    Under Settings->Global User Share settings you have set restrictions on which drives can be used for User Shares, and all those drives are full.  I suspect you meant to allow some of the other drives to be used for User Shares as well?
     
    It is normally better to leave that setting at all drives and then use either Include or Exclude settings on individual shares if you want to restrict the drives they can use.  It is easy to forget the global setting when you add additional drives.
  5. itimpi's post in Disk not available to include/exclude in shares. was marked as the answer   
    Have you checked under Settings->Global Share settings that you have allowed disk1 to be used for User Shares?
  6. itimpi's post in SMART error problem was marked as the answer   
    Have you tried clicking on the orange icon to see what it says the problem is, and then using the Acknowledge option on the drop-down menu so you only get notified about new changes?
  7. itimpi's post in Migrated to new cache drive; no dockers are showing up was marked as the answer   
    You need to reinstall the docker container binaries into the docker.img file (a new one should have been created when you re-enabled Docker).   If you do it via Apps->Previous Apps then they will be re-installed with the same settings as last time.
  8. itimpi's post in Is encryption required when adding a drive? (Sidequestion: How to switch drive parity slot?) was marked as the answer   
    This was unnecessary and maybe even counter-productive as it puts stress on the failing drive.  Rebuilding the contents of the failed drive would get back the contents anyway.
    It sounds like you have encryption set as the default under Settings->Disk Settings?    Regardless, you can click on the drive on the Main tab and explicitly set its format to a non-encrypted file system, which should then let you start the array.
     
    Unraid will be quite happy if you want to leave parity1 slot empty.
     
    You cannot combine removing parity2 and assigning parity1 into a single step.  Moving parity2 to the parity1 slot involves:
    1. Stop the array and unassign parity2.
    2. Start the array to make Unraid 'forget' the parity2 assignment.
    3. Stop the array and assign the old parity2 to parity1.
    4. Start the array to commit the change and build the contents of parity1 (it uses a different algorithm to parity2).
    Until step 4 completes your data will be unprotected against another drive failing.
  9. itimpi's post in Multi nas- Licence was marked as the answer   
    The licence is tied to the flash drive that is used to boot Unraid and therefore each server requires its own licence. 
  10. itimpi's post in Parity Device Is Disabled was marked as the answer   
    A drive is disabled when a write to it fails for some reason.  Since you rebooted before taking diagnostics we cannot see what led up to this, but the commonest cause is connection errors due to SATA or power cabling.  These can sometimes be transient errors, so they are not always easy to diagnose without the diagnostics; if it happens again then make sure to capture them before rebooting.
     
    The SMART information for the parity disk looks OK, but if you want to be confident this is the case then click on the drive on the Main tab and select the option to run the extended SMART test.  This runs completely internally to the drive so is not affected by cabling issues (except perhaps a power issue).
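    For reference, a hedged command-line equivalent of the extended test (the device name /dev/sdX is a placeholder for whatever the parity drive shows up as on your system):
    smartctl -t long /dev/sdX     # start the extended (long) self-test
    smartctl -a /dev/sdX          # check progress and the result once it completes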
     
    To clear the disabled state you need to rebuild the contents of the parity drive.  The process for rebuilding a drive to itself is covered here in the online documentation.
  11. itimpi's post in Data drive replaced with larger capacity but not using new space was marked as the answer   
    I think the file system resize happens at array start, so an array stop/start may fix it.  If not, a reboot probably will.
  12. itimpi's post in How to recover data from physical disk if my unraid server is physically down was marked as the answer   
    Whatever disk you want to get files off!
     
    Parity has no file system and stores none of your data.
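    If you attach the disk to another Linux machine, a minimal sketch of mounting it read-only to copy files off (assuming it shows up as /dev/sdb and was formatted XFS; both are assumptions):
    mkdir -p /mnt/recovery
    mount -o ro /dev/sdb1 /mnt/recovery
    Note that it is the first partition (sdb1) that holds the file system, and mounting read-only helps avoid accidental changes to the disk.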
  13. itimpi's post in Replaced failing drive, server not booting now was marked as the answer   
    The sda1 device is almost certainly your flash drive, so the screenshot indicates that one of the Unraid system files cannot be read from it.   This can sometimes be fixed by simply overwriting all the bz* type files (extracted from a zip file version of the Unraid release).  I would suggest you start by putting the flash drive into a PC/Mac to check that you can read it, and making a backup, before trying the above.   If that does not work it might suggest the flash drive is failing.
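    As a rough sketch of the overwrite step from a Mac/Linux machine (the zip name and mount point are hypothetical, and this assumes the bz* files sit at the top level of the zip; adjust to your release and wherever the flash drive mounts):
    unzip -o unRAIDServer-6.x.zip 'bz*' -d /Volumes/UNRAID/
    The -o option overwrites the existing bz* files without prompting; everything else on the flash drive is left untouched.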
  14. itimpi's post in server not booting was marked as the answer   
    It is worth pointing out that the config folder on the flash drive has your settings.   Once you have a flash drive that is booting, just copy that folder from the backup and it should then boot OK.
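    A minimal sketch of that copy, assuming the backup lives at ~/flash-backup and the newly prepared flash drive mounts at /Volumes/UNRAID (both paths are assumptions):
    cp -r ~/flash-backup/config /Volumes/UNRAID/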
  15. itimpi's post in Woke up to 100% log and not sure how to tackle. was marked as the answer   
    There are lots of entries like:
    May 23 21:09:34 Tower kernel: XFS (md3): Corruption detected. Unmount and run xfs_repair  
    You need to run a file system check on disk3.
  16. itimpi's post in Two new disks Unmountable after one week of working in my array was marked as the answer   
    Those commands are wrong and will result in the error of the superblock not being found.    You need to add the partition number on the end when using the ‘sd’ devices (e.g. /dev/sde1).    Using the ‘sd’ devices will also invalidate parity.  If doing it from the command line it is better to use the /dev/md? type devices (where ? is the disk slot number), as that both maintains parity and means the partition is automatically selected.
     
    It is much better to run the command via the GUI by clicking on the drive on the Main tab and running it from there, as it will automatically use the correct device name and maintain parity.
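    Purely as an illustration of the device naming, a command-line sketch assuming disk 5 is the affected slot and the array has been started in Maintenance mode:
    xfs_repair -n /dev/md5     # dry run: report problems without changing anything
    xfs_repair /dev/md5        # run the actual repair once you are ready
    The GUI route described above does the equivalent of this for you with the correct device already filled in.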
     
  17. itimpi's post in Unraid cache dies 2x really fast was marked as the answer   
    Unless you have been really unlucky with the 2 SSDs I would suspect something is wrong at the hardware level.
  18. itimpi's post in fix common problems error report was marked as the answer   
    The host path for the plex /config folder in your screenshot shows it is pointing to the wrong location.
  19. itimpi's post in HDD read & write speeds fluctuate between zero and full speed was marked as the answer   
    You are getting errors like the following in your syslog:
    May 15 19:02:48 Tower kernel: ata7.00: status: { DRDY }
    May 15 19:02:48 Tower kernel: ata7: hard resetting link
    May 15 19:02:49 Tower kernel: ata5: SATA link down (SStatus 0 SControl 300)
    May 15 19:02:54 Tower kernel: ata7: found unknown device (class 0)
    May 15 19:02:54 Tower kernel: ata5: SATA link down (SStatus 0 SControl 300)
    May 15 19:02:58 Tower kernel: ata7: softreset failed (1st FIS failed)
    May 15 19:02:58 Tower kernel: ata7: hard resetting link
    May 15 19:02:59 Tower kernel: ata7: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
    May 15 19:02:59 Tower kernel: ata7.00: configured for UDMA/133
    May 15 19:02:59 Tower kernel: ata7: EH complete
    This suggests connection issues, which are typically cabling (SATA or power) related.
  20. itimpi's post in Moving a large ammount of data was marked as the answer   
    That process will work if the new server does NOT have a parity drive.   If you do have a parity drive, Unraid will try to clear the disk while adding it to the array, thus wiping the data.
     
    If you do have parity on the new server then you have the option of using the New Config Tool on the new server to add the drive (which will preserve its data) and recalculate parity.   The data would be unprotected until parity completed its rebuild.   If you used this technique then you would probably want to do multiple drives at once to minimise the number of times parity would need rebuilding.
  21. itimpi's post in Disk utilization high for larger disk after disk upgrade was marked as the answer   
    Your disk utilisation is about what I would expect with the default High Water Allocation method which is discussed in some detail here in the online documentation.  I suspect you do not quite understand how it works?
     
    The High Water points are based on the size of the largest drive (not a % used), so in your system the first point would be 233GB.  Disk1, and then disk2, would be used until this point is reached.
    The next point is 116.5GB, and the disks would again be used in turn until this point is reached.  It appears that you probably reached this point.
    The next point would be 58.25GB, and then all drives would be used in turn until they were down to this amount free.
    etc.
     
    I would recommend switching back to High Water, as the Most Free option is the least efficient: it keeps switching drives, thus keeping them spun up.  That said, it is up to you if you prefer to keep all drives with roughly the same amount of free space.
  22. itimpi's post in appdata in array was marked as the answer   
    It depends.   Appdata is where the working files for Docker containers normally live, but you can configure containers to put them elsewhere.   If you configure a container to place files there as they download then obviously it grows in size to accommodate those files.
  23. itimpi's post in Cache Problems After Move and Long Downtime was marked as the answer   
    It looks like your docker image file is corrupt and your cache drive has dropped offline.    You might have to power cycle to get it back online.
     
    That is actually a good time for a drive that size.
     
    You might want to install the Parity Check Tuning plugin to run the check in increments outside prime time.
     
  24. itimpi's post in something tells me I have a very unhappy system was marked as the answer   
    That will fail because you have the wrong device name:  if using the /dev/sd? devices you have to add a 1 on the end to specify the partition.   Using those device names will also invalidate parity so is not recommended.
     
    If you instead do this from the GUI while the array is in Maintenance mode, by clicking on the drive on the Main tab to access the check/repair functionality, it will use the correct device name for you and will also maintain parity.
  25. itimpi's post in ZFS Pool Without unRaid Array? was marked as the answer   
    It is a requirement at the 6.12 release that you have at least one drive in the array.   However if you intend for all your data to be on a ZFS pool then you can use a small flash drive (that will not be used to store any data) to satisfy this requirement.   That requirement may disappear in a future release with any luck.