Everything posted by itimpi

  1. Nothing really wrong with that per se, but doing it all on Unraid is probably more efficient and removes a reliance on another machine.
  2. This typically means that at least one of your docker containers is writing internally to the docker image file, when the location it is writing to should ideally be mapped to external storage on the Unraid host. You really want the docker image file to contain only static binaries if at all possible.
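     A quick way to narrow down the culprit is to compare each container's writable-layer size from a terminal; the container name and paths below are only hypothetical examples:

     ```
     # Show the per-container writable-layer size; a large or steadily
     # growing value points at the container writing inside the image
     docker ps --size

     # When (re)creating the offending container, map its internal write
     # location to array/cache storage instead (names and paths are examples)
     docker run -d --name myapp -v /mnt/user/appdata/myapp:/config myapp/image
     ```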
  3. Is /media the server? If so, are you sure you have not exported disk14 under the settings for that drive? If you want anything other than guesses, you should post your system's diagnostics zip file.
  4. If you do not want to consider a backup server, then you might want to compare the economics of hard disks plugged in temporarily to receive backups against the price of using tapes.
  5. There is no SMART information for disk16, which suggests it dropped offline, and there are also I/O errors in the syslog reporting possible XFS-level corruption. I suggest you power cycle the server and then, as long as disk16 shows up, run a file system check on it. You might also want to consider an extended SMART test, although that would take around a day on a disk of that size. If anything looks even slightly wrong, post new diagnostics after the check.
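     If you prefer the command line to the GUI's Check Filesystem option, a minimal sketch (assuming disk16 is XFS and the array has been started in Maintenance mode; the exact md device name can vary slightly between Unraid versions):

     ```
     # -n = check only, report problems without changing anything
     xfs_repair -n /dev/md16

     # If problems are reported, repeat without -n to actually repair
     xfs_repair /dev/md16
     ```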
  6. You cannot be certain with XFS file systems unless you have checksums from when the data was known to be good. This is why some people use the File Integrity plugin in conjunction with XFS file systems. You also have the option of using BTRFS (and shortly ZFS) file systems on array drives, which have built-in checksum checking of files as they are read or written; although this does not correct such errors, it lets you know immediately when they are detected and in which files.
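     If you wanted to do this by hand rather than via the File Integrity plugin, a minimal sketch (the share name and output location are just examples):

     ```
     # Record checksums while the data is known to be good
     find /mnt/user/Media -type f -exec b2sum {} + > /boot/media.b2sums

     # Later, verify the files still match the recorded checksums
     b2sum -c /boot/media.b2sums
     ```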
  7. A few comments: Your plan should work, but it may not be necessary if the disk is not really going bad. One thing you could do now is run an Extended SMART test on the drive; if it passes, the drive is quite likely to be OK. A few sync errors after an unclean shutdown are expected. The pending sectors on disk3 are not a good sign, but sometimes these go back to 0 if the sectors are rewritten. CRC errors rarely in themselves indicate a problem with the actual drive, as they relate to the connection between the drive and the host. The Preclear on the replacement disk is not strictly necessary unless you want it as a stress test; however, a rebuild onto that drive would also act as a good stress test and would be quicker (assuming it works without error). Since you already have a drive on order, maybe the best thing is to proceed with your plan and keep disk3 with its contents intact until the rebuild onto the replacement disk3 completes successfully. After that you can run a Preclear on the old drive to see if it passes OK.
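     For reference, the Extended SMART test can also be started from a terminal (replace sdX with the actual device):

     ```
     # Start the Extended (long) self-test; it runs in the background
     smartctl -t long /dev/sdX

     # Check progress and the eventual result ("Completed without error")
     smartctl -a /dev/sdX
     ```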
  8. Looking at the spec of that device, it appears that the 32GB present will be eMMC and not NVMe. Apparently the slot is meant to support both. I am not sure if Unraid is expected to see an eMMC device, and even if it were, you will not know if an NVMe device would work without trying one out. Having said that, maybe somebody who has actual experience of that device will be able to provide a more informed answer.
  9. That only holds small files related to VMs and their configuration. I would think that in practice 100MB is more than enough, so you should never outgrow the default 1GB unless you have hundreds of VMs.
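     You should be able to see how much of it is actually in use while the VM service is running, since the image is loop-mounted at /etc/libvirt:

     ```
     # Show the size and current usage of the mounted libvirt image
     df -h /etc/libvirt
     ```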
  10. This should not happen if the share is set to Use Cache=Only - are you sure? If so, then this is a bug that will need fixing.
  11. You probably want this part of the online documentation, accessible via the Manual link at the bottom of the Unraid GUI; every forum page also has a DOCS link at the top and a Documentation link at the bottom.
  12. You do not mention what the permissions get changed to, which might help identify the likely cause. I would think that one of the docker containers might be the culprit. You say you need to run Krusader to reset the permissions? Do you mean the New Permissions tool provided with Unraid for this purpose does not do the job? (I have thought that calling it Reset Permissions would be a better name for it.)
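      To see what they are being changed to, something like this should list anything that differs from the nobody:users ownership Unraid normally expects (the share name is just an example):

      ```
      # List files whose owner or group is not the Unraid default
      find /mnt/user/MyShare \( ! -user nobody -o ! -group users \) -ls
      ```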
  13. Note I am not talking about a parity check, but about rebuilding the data drives. All other ways I could think of involve LOTS of copying of data around and would be both risky and time-consuming, and probably even more demanding on the drives. In practice a rebuild should not be that hard on the drives, as it just involves reading (or writing) the sectors on each drive serially from start to end with minimal head movement.
  14. The safest way to proceed would be to replace the array drives 2 at a time by rebuilding them (you can do 2 simultaneously with dual parity). That way you always have the disk being removed intact until the rebuild finishes, and you are protected against other drives failing, as you can revert to the previous position if needed. I am not sure there is any alternative approach that would be faster.
  15. You can normally use the manufacturer's software. There are also plenty of third-party utilities such as DiskCheckup.
  16. You do not mention whether you want to keep the data on the existing drives. Note that you do not need to Preclear the drives unless you want to do this as a confidence test on them. You could always run the extended SMART test on another machine as an alternative to a Preclear if you do want to test them.
  17. Have you read the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page? The section covering Shares is here. If there is something not answered there, perhaps you can clarify exactly what it is that is still not clear.
  18. By the fact that the repeating message stops appearing in the syslog every few minutes (there is a button at the top right of the GUI to display the syslog). The syslog is in RAM and gets cleared when you reboot.
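      From a terminal the same log can be followed live, which makes it easy to see whether the message is still being produced:

      ```
      # Follow the syslog as new entries arrive (same content as the GUI window)
      tail -f /var/log/syslog
      ```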
  19. This seems to be quite normal for that particular tab.
  20. Looks like you may have a docker container continually crashing and trying to restart. You should be able to work out which one by examining their logs and/or stopping them until the messages stop appearing in your log every few minutes.
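      A minimal sketch of tracking it down from a terminal (the container name is just an example):

      ```
      # A container stuck in a crash loop shows a recent "Exited" status or
      # a constantly resetting "Up" time
      docker ps -a

      # Inspect the last log lines of a suspect container
      docker logs --tail 50 myapp
      ```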
  21. To clear the disabled state you need to rebuild the parity drive as described here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  22. There tend to be 2 stages. The first is to increase the size of the vdisk on the host. The second is, from within the VM, to get it to rescan the drive so that it realises the drive size has changed.
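      A minimal sketch of both stages for a Linux guest (paths, sizes, and device names are only examples; with virtio the guest often notices the new size automatically, and the partition/file system inside the guest still needs growing afterwards):

      ```
      # Stage 1 - on the host, with the VM shut down: grow the vdisk by 20G
      qemu-img resize /mnt/user/domains/MyVM/vdisk1.img +20G

      # Stage 2 - inside the guest: ask the kernel to rescan the disk size
      echo 1 > /sys/class/block/sda/device/rescan
      ```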
  23. Earlier in the day I had it mounted by UD. It is showing in UD as dev1 and the sdX designation is sdb. It does not show up as mounted when I use the ‘df’ command. My system is set to power down overnight, so I expect it to be rectified in the morning. Just thought it was worth asking in case there was some specific action I should take? djw-unraid-diagnostics-20230409-1719.zip
  24. I have a strange situation where I plug in a drive and after a short delay it shows, with the UNMOUNT button active. However, pressing that button seems to think for a moment and then returns to displaying that button. The syslog says that the unmount has failed because the drive is not mounted! Any idea what can cause that, and can it be recovered from without rebooting the server?
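      Checking the kernel's view of the mount state should confirm the mismatch the syslog reports (the device name matches the sdb mentioned above):

      ```
      # If this prints nothing, the kernel agrees the drive is not mounted
      # and the UNMOUNT button state in UD is stale
      grep sdb /proc/mounts
      ```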