
itimpi

Everything posted by itimpi

  1. It is normally Marvell controllers that tend to cause problems. It looks like multiple disks dropped offline at the same time, so you might want to check whatever they have in common (such as cabling).
  2. According to your screenshot Cache is showing as sdd. The other one is showing as sdc.
  3. True, but Unraid does not actually create the folder for any particular disk until it puts the first file for that share on the disk. In the particular use case the OP describes, you probably need to create it manually on the disks you are copying to while bypassing user share support.
  4. Yes. It is likely that you would not have ended up with as much in the lost+found folder if the correct device had been used for the first xfs_repair (not that that is much consolation at this point). Sorting out the lost+found folder can unfortunately be a lot of work when file names are lost. You can use the Linux 'file' command to at least get the file type of files with cryptic names. Yes, I would not expect there to be any parity errors at this point.
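A minimal sketch of the lost+found triage described above, using the Linux 'file' command. The /tmp demo directory and file name are hypothetical; on Unraid the real folder would be something like /mnt/disk1/lost+found.

```shell
# Sort a lost+found folder by detected file type using the 'file' command.
# A demo directory under /tmp is used so this sketch is safe to run anywhere.
LF=/tmp/lostfound_demo
mkdir -p "$LF" && cd "$LF" || exit 1
printf 'hello world\n' > 12345          # stands in for a recovered file with a cryptic name
file -b 12345                            # prints the detected type, e.g. "ASCII text"
# Move each file into a folder named after its detected type:
for f in *; do
    [ -f "$f" ] || continue
    t=$(file -b "$f" | cut -d, -f1 | tr ' /' '__')
    mkdir -p "by-type/$t" && mv "$f" "by-type/$t/"
done
ls by-type
```

Once files are grouped by type, renaming the ones worth keeping is much quicker.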
  5. You are likely to get better informed feedback if you attach your system's diagnostics zip file (obtained via Tools -> Diagnostics) to your NEXT post so we can see more about your configuration.
  6. Worth noting that you need to be in the Advanced view to see that setting.
  7. Whichever one shows as device sdd in the GUI
  8. Now restart the array in normal mode to look at the drive contents.
  9. I would now rerun it without the -n (no modify) flag to see if that helps. You might also want to check whether a lost+found folder has been created on the drive from files whose names could not be resolved.
  10. I am afraid I have no idea whether what you did was OK if you omitted the partition number, or whether it would have damaged the file system. Whenever I made that mistake it failed as it could not find the superblock. Unless there is serious corruption an xfs_repair is very fast (seconds/minutes), so if your laptop went to sleep this suggests something else was happening. @JorgeB might have a suggestion on the best action to take at this point.
  11. No reason I could see then why any files should have gone missing unless they were actively deleted somehow.
  12. Using /dev/sdi would be wrong - it should be /dev/sdi1 (the partition number is required for /dev/sdX type devices), and doing it that way does not update parity as corrections are made. I am not sure what the effect of leaving off the ‘1’ is - I would have thought it would fail. Using the sdX type device would certainly mean you should expect errors when doing the parity check. You could instead have used a device of the form ‘/dev/mdX’, where X is the disk number, as that does not need the partition specified and maintains parity. You can run a check/repair from the GUI as described here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI. If you ran a correcting check then you should expect the next one to report 0 errors.
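A sketch of the device-name rules described above; the disk letter and slot number are examples, not taken from any particular system.

```shell
# Device-name rules for xfs_repair on Unraid (example names):
DISK=sdi   # raw device letter - needs the partition number appended
SLOT=3     # array disk slot - /dev/mdX needs no partition number
echo "raw device (parity NOT updated):  /dev/${DISK}1"
echo "md device  (parity kept in sync): /dev/md${SLOT}"
# The actual repair would be run with the array started in Maintenance
# mode - do not run these blindly; check first with -n:
#   xfs_repair -n /dev/md3    # read-only check
#   xfs_repair    /dev/md3    # repair, maintaining parity
```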
  13. No - there is an option to run the check from the GUI by clicking on the drive on the Main tab. For XFS drives you need the array running in Maintenance mode. This is covered here in the online documentation accessible via the Manual link at the bottom of the GUI.
  14. The fact that the drive is empty should be irrelevant to a parity check/rebuild - it simply processes every sector on the drive in turn regardless of its contents.
  15. Did you run the xfs_repair from the GUI or the command line? If the command line, exactly what device name did you use? Was the parity check you ran correcting or non-correcting? If correcting then the next one should show 0 errors. If it was non-correcting then you need to run a correcting check to get rid of the errors - and be aware that the correcting check will then show the same number of errors, as Unraid misleadingly reports each correction as if it were an error in the summary (but the syslog shows them being corrected).
  16. Nothing obviously wrong in the diagnostics, and they do show that you have almost certainly assigned the correct disk to parity. It might be worth running a file system check on each of the data disks to confirm that there is no corruption at that level. A few minor anomalies I noticed in share settings:
      • w——w has files on cache with Use Cache=No. The No setting prevents files moving from cache to array.
      • system has files on disk1 with Use Cache=Prefer. You might want to check out what/why files have not been moved to cache.
      • appdata has files on array disks with Use Cache=Only. The Only setting prevents files moving from array to cache.
  17. If you can get to the command line use the ‘diagnostics’ command.
  18. Nothing obvious from what you did that should cause problems. You should provide your system diagnostics zip file (Tools -> Diagnostics) to get any sort of informed feedback.
  19. It has always been specified this way. At one point the temperature option was not working properly, so it may well be that when you had that setting it was simply having no effect. If you have specified Daily (which is the default) for the frequency then you specify the time in hours + minutes. Originally this was the only option. If you specify Custom as the frequency then you can use crontab format, which gives you more control at the expense of not being as convenient to use. This was added some time ago and is very useful to me when testing, as it allows for options that are not simply daily.
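For reference, a hypothetical Custom schedule in crontab format (the five fields are minute, hour, day-of-month, month, day-of-week):

```
# min  hour  dom  month  dow
  30   1     *    *      1,4    # 01:30 on Mondays and Thursdays only
```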
  20. Use the ‘Add Pool’ button. With unRaid 6.9.0 supporting multiple pools (any of which can be used for caching purposes) the name is no longer fixed as ‘cache’ and can be whatever the user wants.
  21. As long as you have valid parity there is no problem replacing a drive in Unraid with a larger one. The approach is different in that you do not first copy the data elsewhere. When you remove the drive to be replaced, Unraid will start emulating the missing drive using the combination of the other drives plus parity. When you plug in the replacement (larger) drive, Unraid will ‘rebuild’ it to have the same contents as were on the ‘emulated’ drive. Once the rebuild has completed (thus restoring all the original contents), if the replacement drive is larger then the file system is expanded to fill the whole drive.
  22. @Marino Found out what looks like the cause of your temperature problems. I think the plugin is working correctly but you have misunderstood the way the temperature values are used for the temperature-related pause/resume. From the log I think that you have entered actual temperatures rather than the amount away from the warning threshold set for the drive in the drive's settings? The reason you do not get an immediate pause is that the task that looks for over-heating drives only runs at regular intervals (currently set to be every 7 minutes). As an example, if the warning threshold on a particular drive is 50C then entering values of:
      • Pause=2 means pause at 48C (50-2)
      • Resume=7 means resume at 43C (50-7)
      Unraid provides a global value for the warning threshold under Settings->Disk Settings but allows you to override the global value at the individual drive level by clicking on it on the Main tab. The plugin works this way as different drives can have different values set at the Unraid level, so using relative values means each drive can potentially have different pause/resume temperatures. Can you please confirm that my analysis is correct? If it is I will enhance the built-in help with a worked example of the type given above. I will also add some sort of upper limit to the values that can be entered, to try to pick this type of misunderstanding up from the outset on the plugin's settings page.
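The arithmetic above can be sketched as follows (the variable names are mine, not the plugin's):

```shell
# Relative pause/resume temperatures as described above (sketch):
WARN=50     # drive's warning threshold from Unraid's disk settings
PAUSE=2     # plugin value: degrees below WARN at which to pause
RESUME=7    # plugin value: degrees below WARN at which to resume
echo "pause at $((WARN - PAUSE))C, resume at $((WARN - RESUME))C"
```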
  23. I think this is wrong and mover DOES move to the share and not directly to a drive. I believe this is what the /mnt/user0 mount point is used for (it is the User Share including only the array disks ignoring anything on the cache).
  24. That is the correct file - sorry about giving you the wrong name initially. Early on the speed is just a rough estimate - it gets more accurate as you go further. You are correct - you have to do a refresh to get it to show correctly. I have asked if there is any way to force a refresh from within the plugin but had no reply that gives a way to do this. It will be a coincidence as there is nowhere in the plugin that is monitoring the percentage - it is just used for display purposes. I'll give more feedback when I have had a chance to look at that log
  25. Unraid is not going to ‘handle’ that, and it is later going to cause you problems if you leave it on that SSD. A vdisk image is created as a Linux ‘sparse’ file, which means that the physical space used is only that actually written to the file, but it can grow up to the 2TB you created it as while the VM continues to write to it. The VM will stop working correctly once the free space on the SSD is exhausted. To avoid such issues the vdisk file needs to be moved to a disk which does have 2TB free.
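The sparse-file behaviour described above can be demonstrated safely with a small demo file; the path and sizes are illustrative (a 2GB file standing in for the 2TB vdisk).

```shell
# A sparse file's apparent size is not the space it occupies on disk.
IMG=/tmp/vdisk_demo.img
truncate -s 2G "$IMG"       # apparent size 2 GiB, but no blocks allocated
ls -lh "$IMG"               # reports ~2.0G (apparent size)
du -h "$IMG"                # reports ~0 (actual blocks used)
# Writing into the file allocates only the blocks actually written:
printf 'some data' | dd of="$IMG" bs=1M seek=1 conv=notrunc 2>/dev/null
du -h "$IMG"                # grows only by what was written
```

This is why a vdisk can "fit" on a drive at creation time and still fill it later as the VM writes data.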