
itimpi (Moderator)
  • Posts: 19,886
  • Days Won: 54

Community Answers

  1. itimpi's post in server not booting was marked as the answer   
    It is worth pointing out that the config folder on the flash drive holds all your settings.   Once you have a flash drive that boots, just copy that folder across from your backup and the server should then boot with your previous configuration. 
  2. itimpi's post in Woke up to 100% log and not sure how to tackle. was marked as the answer   
    There are lots of entries like this in your syslog:
    May 23 21:09:34 Tower kernel: XFS (md3): Corruption detected. Unmount and run xfs_repair  
    You need to run a file system check on disk3
  3. itimpi's post in Two new disks Unmountable after one week of working in my array was marked as the answer   
    Those commands are wrong and will result in an error about the superblock not being found.    You need to add the partition number on the end when using the ‘sd’ devices (e.g. /dev/sde1).    Using the ‘sd’ devices will also invalidate parity.  If doing it from the command line it is better to use the /dev/md? type devices (where ? is the disk slot number), as that both maintains parity and means the partition is selected automatically.
     
    It is much better to run the command via the GUI by clicking on the drive on the Main tab and running it from there, as it will automatically use the correct device name and maintain parity.
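As a sketch of the device-name rule above (the slot number 3 and /dev/sde here are hypothetical; substitute your own), this prints the command you would run:

```shell
# Wrong: a raw 'sd' device with no partition number - xfs_repair
# reports that no superblock can be found, and writing via the raw
# device would also invalidate parity:
#   xfs_repair /dev/sde
#
# Better from the command line (array started in Maintenance mode):
# the md device maintains parity and implies the partition:
slot=3                                  # hypothetical array slot number
echo "xfs_repair -n /dev/md${slot}"     # -n = check only, make no changes
```

Drop the -n flag to actually repair; the GUI route described above remains the safer option.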
     
  4. itimpi's post in Unraid cache dies 2x really fast was marked as the answer   
    Unless you have been really unlucky with the 2 SSDs, I would suspect something is wrong at the hardware level.
  5. itimpi's post in fix common problems error report was marked as the answer   
    The host path for the plex /config folder in your screenshot shows it is pointing to the wrong location.
  6. itimpi's post in HDD read & write speeds fluctuate between zero and full speed was marked as the answer   
    You are getting errors like the following in your syslog:
    May 15 19:02:48 Tower kernel: ata7.00: status: { DRDY }
    May 15 19:02:48 Tower kernel: ata7: hard resetting link
    May 15 19:02:49 Tower kernel: ata5: SATA link down (SStatus 0 SControl 300)
    May 15 19:02:54 Tower kernel: ata7: found unknown device (class 0)
    May 15 19:02:54 Tower kernel: ata5: SATA link down (SStatus 0 SControl 300)
    May 15 19:02:58 Tower kernel: ata7: softreset failed (1st FIS failed)
    May 15 19:02:58 Tower kernel: ata7: hard resetting link
    May 15 19:02:59 Tower kernel: ata7: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
    May 15 19:02:59 Tower kernel: ata7.00: configured for UDMA/133
    May 15 19:02:59 Tower kernel: ata7: EH complete
    This suggests connection issues, which are typically cabling (SATA or power) related.
  7. itimpi's post in Moving a large ammount of data was marked as the answer   
    That process will work if the new server does NOT have a parity drive.   If you do have a parity drive, Unraid will try to clear the drive while adding it to the array, thus wiping the data.
     
    If you do have parity on the new server then you have the option of using the New Config Tool on the new server to add the drive (which will preserve its data) and recalculate parity.   The data would be unprotected until parity completed its rebuild.   If you used this technique then you would probably want to do multiple drives at once to minimise the number of times parity would need rebuilding.
  8. itimpi's post in Disk utilization high for larger disk after disk upgrade was marked as the answer   
    Your disk utilisation is about what I would expect with the default High Water allocation method, which is discussed in some detail here in the online documentation.  I suspect you do not quite understand how it works.
     
    The High Water points are based on the size of the largest drive (not a % used), so in your system the first point would be 233GB.  Disk1 and then disk2 would each be used until this point is reached.
    The next point is 116.5GB, and the disks would then be used in turn until this point is reached.  It appears that you have probably reached this point.
    The next point would be 58.25GB, and all drives would then be used in turn until they were down to this amount free.
    etc.
     
    I would recommend switching back to High Water, as the Most Free option is the least efficient: it keeps switching drives, and thus keeps them all spun up.  That said, it is up to you if you prefer to keep all drives with roughly the same amount of free space.
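The halving described above can be sketched numerically. This assumes the largest array drive is 466GB, which is what would give the 233GB starting point in this example:

```shell
# Print the first few High Water points: half the size of the
# largest drive, then half again each time every disk has been
# filled down to the current point.
awk 'BEGIN {
  largest = 466            # GB - hypothetical largest drive size
  level = largest / 2
  for (i = 1; i <= 3; i++) {
    printf "High Water point %d: %.2f GB\n", i, level
    level /= 2
  }
}'
```

This prints 233.00, 116.50 and 58.25 GB, matching the points listed above.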
  9. itimpi's post in appdata in array was marked as the answer   
    It depends.   Appdata is where the working files for docker containers normally live, but you can configure containers to put them elsewhere.   If you configure a container to place files there as they download then obviously it grows in size to accommodate those files.
  10. itimpi's post in Cache Problems After Move and Long Downtime was marked as the answer   
    It looks like your docker image file is corrupt and your cache drive has dropped offline.    You might have to power cycle the server to get it back online.
     
    That is actually a good time for a drive that size.
     
    You might want to install the Parity Check Tuning plugin to run the check in increments outside prime time.
     
  11. itimpi's post in something tells me I have a very unhappy system was marked as the answer   
    That will fail because you have the wrong device name.   If using the /dev/sd? devices you have to add a 1 on the end to specify the partition.   Using those device names will also invalidate parity, so it is not recommended.
     
    If you instead do this from the GUI while the array is in Maintenance mode, by clicking on the drive on the Main tab to access the check/repair functionality, it will use the correct device name for you and will also maintain parity.
  12. itimpi's post in ZFS Pool Without unRaid Array? was marked as the answer   
    It is a requirement as of the 6.12 release that you have at least one drive in the array.   However, if you intend all your data to be on a ZFS pool then you can use a small flash drive (that will not be used to store any data) to satisfy this requirement.   With any luck that requirement may disappear in a future release.
     
  13. itimpi's post in emhttpd: error: hotplug_devices was marked as the answer   
    Looking at the syslog, as soon as you start the rebuild you get:
    May 8 21:08:23 MegaAtlantis kernel: sd 7:0:0:0: [sdb] tag#1201 Sense Key : 0x2 [current]
    May 8 21:08:23 MegaAtlantis kernel: sd 7:0:0:0: [sdb] tag#1201 ASC=0x4 ASCQ=0x0
    May 8 21:08:23 MegaAtlantis kernel: sd 7:0:0:0: [sdb] tag#1201 CDB: opcode=0x8a 8a 00 00 00 00 00 01 9b 04 58 00 00 04 00 00 00
    May 8 21:08:23 MegaAtlantis kernel: I/O error, dev sdb, sector 26936408 op 0x1:(WRITE) flags 0x0 phys_seg 128 prio class 0
    May 8 21:08:23 MegaAtlantis kernel: md: disk1 write error, sector=26936344
    May 8 21:08:23 MegaAtlantis kernel: md: disk1 write error, sector=26936352
    followed by a lot more write errors.
     
    It may really be a failing disk, so I suggest you click on the drive on the Main tab, disable spin-down, and run the extended SMART test.   If that fails then you really do have a failing drive. 
     
    BTW:  it was not relevant in this case, but often we will want the diagnostics with the array started in normal mode so we can see if emulated disks are mounting.
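For reference, the GUI test described above corresponds roughly to the following smartctl invocations; this sketch just prints the commands you would run (the /dev/sdb name is taken from the log extract, so adjust it for your system):

```shell
dev=/dev/sdb                       # suspect drive from the log extract
# Start the extended (long) SMART self-test; spin-down must stay
# disabled or the test may be aborted part-way through:
echo "smartctl -t long ${dev}"
# Once it completes (it can take many hours), read back the result:
echo "smartctl -a ${dev}"
```
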
  14. itimpi's post in New box, wait for zfs release or use parity array? was marked as the answer   
    Yes - what you mention is the way to go if you want a ZFS array.    Just make sure any shares are set to only be on the ZFS array (or the SSD one for appdata/dockers).  
     
    At some point in the future (maybe in the 6.13 release) the requirement to have that dummy flash drive in the array will be removed as the existing array type just becomes another pool type you can use.
  15. itimpi's post in Dynamix plug-in error after rebuild [Solved] was marked as the answer   
    You might want to open this file on the flash drive - it should be a simple text file with 1 line for each entry in the parity history.   Sounds as if it might have gotten corrupted.
     
    I think if it is deleted it gets recreated next time any check gets run.
  16. itimpi's post in Cache - Unmountable: No pool uuid was marked as the answer   
    If you now add the second drive to the pool the file system will automatically be set to btrfs, but it will not initially be formatted.   You then need to format it from within Unraid (which formats both drives).   
     
    Note that with the soon-to-be-released 6.12 release you will also have the option of using ZFS in multi-drive pools as an alternative to btrfs, so if you are interested in going that route you can install 6.12-rc5 (or wait until it goes stable, which is expected to be within days) and then create the multi-drive pool selecting ZFS as the file system to be used.
     
    It depends where you copied the data to.  Copying it back manually will work and is probably fastest, but if it was copied to a share of the correct name on the main array then setting the relevant shares to Use Cache: Prefer and running mover (with the docker and VM services disabled) will copy the files back.
  17. itimpi's post in Community Plugin installation Generic Error was marked as the answer   
    The syslog in those diagnostics shows problems with the flash drive.   I would suggest you put it in a PC/Mac to check it can be read correctly.    It may be failing, so do you have a backup taken since you last made any configuration changes?
  18. itimpi's post in Ethernet speed issue. 10GB NIC is not recognised properly was marked as the answer   
    It is not clear that ZFS in the array offers significant benefits over using BTRFS in the array as performance is still capped by the way the array maintains parity, and there have been reports that ZFS is slower.  
     
    The big benefit of ZFS is going to be when it is used in a pool as there it can give significant performance benefits.   Long term the current Unraid array is going to become just another pool type that you can use when appropriate for a particular use case.
  19. itimpi's post in QNAP ts-1635ax BIOS was marked as the answer   
    That QNAP device will not be capable of running Unraid as it has an ARM-based processor, while Unraid requires an x86-based one.
  20. itimpi's post in Suggestion on storage setup was marked as the answer   
    Should be fine.
     
    You might find this item on Unraid 6.12-rc4 to be of interest in understanding about Unraid storage management.
  21. itimpi's post in Is this a sign of a bad flash drive? was marked as the answer   
    It is normal for this path to exist in addition to /mnt/user.  The user0 path is the contents of the share ignoring any files that are on a pool, whereas the user one does include any pool files.   /mnt/user disappearing is typically caused by the shfs process that supports user shares crashing.
     
  22. itimpi's post in Parity Check Completing but some shares unprotected was marked as the answer   
    Parity does not backup your shares - it merely provides the mechanism that means if an array drive fails then its contents can be reconstructed.   You still need backups of any important/critical data.
     
    In terms of shares showing some files as unprotected: that is normal if the share in question has files on a pool (cache) and that pool is not redundant.
  23. itimpi's post in Cache drive/Docker containers missing was marked as the answer   
    The cache drive is not showing up, but since you rebooted before taking diagnostics we cannot see what led up to that.
     
    Your best chance at this point is to power-cycle the server to see if the cache drive becomes visible again.  If not then it has probably failed.
  24. itimpi's post in Disk rebuild elapsed time seems to be off. was marked as the answer   
    The Parity Check Tuning plugin will get the correct times in the final report and the history record when the operation completes.   I believe the problem is that the standard Unraid reporting does not properly take into account the fact that an array operation is running in increments.
  25. itimpi's post in Cannot Write to Disk was marked as the answer   
    Have you tried doing a file system check on the disk?