Everything posted by itimpi

  1. Just a warning - RAID controllers tend to not work well with Unraid as they get in the way of Unraid managing the drives.
  2. If you mean a ‘root’ share (which as mentioned is not a standard feature of Unraid), then you can now achieve this by using the functionality built into the Unassigned Devices plugin.
  3. Not quite true - ZFS can be used in the array, but in that case each drive is a single-drive, self-contained ZFS file system.
  4. I have not managed to reproduce this error unfortunately. If the syslog entries are still happening, is there any chance you could do the following?
     1) Enable the Testing log mode in the plugin settings.
     2) Go to Tools->webGui->PHP settings and set the Error Reporting Level to All Categories.
     3) Wait until you get another occurrence of the log entry - since you say they were happening every 6-7 minutes this should not take long.
     4) Send me a copy of your system's diagnostics (PM will do if you do not want to post them here) and any entries in the log shown by View Log on the PHP Settings page that refer to the Parity.Check.Tuning plugin.
     Having done this you can set the settings you changed in 1) and 2) back to their default values to avoid excessive logging taking place. Hopefully this will give me the additional information needed to pin down exactly why those log errors are occurring on your system.
  5. Assuming you mean the main Unraid array (rather than an array set up as a pool) then I am not sure we have enough experience to be sure at the moment, but I would lean towards using ZFS now that 6.12 stable is available.
  6. The output looks good, so if you restart the array in Normal mode the drive should now mount without any issues.
  7. I notice that the settings for disk critical and warning temperatures appear in the top section if you go to Settings->Disk Settings, but are in the SMART section if you click on a drive on the Main tab to get to the settings for just that drive. Personally I think the top section is the right place, but at the very least the two places you can set these should be in the same area for consistency.
  8. The Docker % has nothing to do with RAM. It is the % of the space allocated to the docker image file that is currently being used (the image has a default size of 20GB). Similar comments apply to the flash and log percentages.
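     As an aside, here is a minimal sketch (not how Unraid itself computes the figure) of deriving that same kind of percentage from the space used inside the docker image; it assumes the image is mounted at /var/lib/docker as on a running system:

        import os

        def docker_image_usage(mount_point: str = "/var/lib/docker") -> float:
            # Percentage of the docker image's allocated space (20GB by default)
            # that is currently in use - nothing to do with RAM.
            stats = os.statvfs(mount_point)
            total = stats.f_blocks * stats.f_frsize   # allocated size of the image
            free = stats.f_bfree * stats.f_frsize     # space still free inside it
            return 100.0 * (total - free) / total

        print(f"Docker image usage: {docker_image_usage():.1f}%")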
  9. You are assuming that they are getting it from the same source.
  10. That suggests something is going wrong in the periodic monitor task the plugin runs. I will have to see if I can reproduce it. Can you please confirm which versions of Unraid and the plugin you are using, to help with seeing if I can reproduce it? I will think about whether there is any easy way to get the behaviour you describe. The plugin is as far as possible stateless, so knowing how it got to the state at any particular instant in time is not always easy to achieve.
  11. I do not think you can do what you want in one pool. All vdevs in a given pool have to have the same number of drives.
  12. As long as after download the files get moved to a share that is set to Yes, then I would leave the setting for the downloads share as Prefer for performance reasons. The only other thing to watch out for is to make sure the Minimum Free Space setting for the pool is set to something sensible to avoid ever accidentally over-filling the pool, as a completely full pool can cause problems.
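     For anyone wondering what the Minimum Free Space setting actually guards against, this is a simplified sketch of the idea (the paths and threshold below are illustrative assumptions, not Unraid's internal code): once free space on the pool drops below the threshold, new files should go to the array instead of filling the pool.

        import shutil

        MIN_FREE_BYTES = 50 * 1024**3   # set comfortably larger than your biggest download

        def pick_target(file_size: int,
                        pool_path: str = "/mnt/cache",
                        array_path: str = "/mnt/user0/downloads") -> str:
            # Write to the pool unless doing so would breach the minimum free space,
            # in which case fall back to the array copy of the share.
            free = shutil.disk_usage(pool_path).free
            if free - file_size < MIN_FREE_BYTES:
                return array_path
            return pool_path

        print(pick_target(10 * 1024**3))   # e.g. a 10GB download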
  13. Tools->New Config. Described here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  14. You do not show the mappings to host locations in Krusader. A screenshot of those might help? Note that you could use the Dynamix File Manager plugin as an alternative to using Krusader as that avoids such issues.
  15. I would suggest a better solution is:
      - copy across the config/plugins/dockerMan/templates-user/*.xml files for the moved containers to the flash on the new server. This means you can now restore these containers via Apps->Previous Apps with their previous settings.
      - copy across the appdata folders for the containers to be moved to the appdata share on the new server.
      - use Apps->Previous Apps to re-install the containers. You can check for each one that the settings are still correct for the new server.
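     If you would rather script those two copy steps than do them by hand, the sketch below shows the idea; every mount point and container name in it is an assumption made for illustration and will need adjusting to wherever the old and new servers' shares are actually reachable.

        import shutil
        from pathlib import Path

        OLD_FLASH = Path("/mnt/remotes/OLDSERVER_flash")      # old server's flash share (assumed mount)
        NEW_FLASH = Path("/mnt/remotes/NEWSERVER_flash")      # new server's flash share (assumed mount)
        OLD_APPDATA = Path("/mnt/remotes/OLDSERVER_appdata")  # old server's appdata share (assumed mount)
        NEW_APPDATA = Path("/mnt/user/appdata")               # appdata share on the new server

        CONTAINERS = ["krusader"]                              # containers being moved (example name)
        TEMPLATES = "config/plugins/dockerMan/templates-user"

        (NEW_FLASH / TEMPLATES).mkdir(parents=True, exist_ok=True)
        for name in CONTAINERS:
            # 1. copy the container's user template so Apps->Previous Apps can offer it
            for xml in (OLD_FLASH / TEMPLATES).glob(f"*{name}*.xml"):
                shutil.copy2(xml, NEW_FLASH / TEMPLATES / xml.name)
            # 2. copy the container's appdata folder across
            src = OLD_APPDATA / name
            if src.is_dir():
                shutil.copytree(src, NEW_APPDATA / name, dirs_exist_ok=True)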
  16. That is expected behaviour: the moment the plugin detects that a manual check has been started, and you have set the option to run manual checks in increments, it will automatically pause the check if it is outside the increment period (ready to restart it when the next increment starts). The only way to achieve what you want is to not enable the option to pause/resume manual checks until you are ready for the plugin to start doing this.
  17. Have you formatted the drives (the option near the array Start button)? Until the drives have been formatted while in the array it is normal for them to show as unmountable, as at that point they have no file system.
  18. To stop the pool (cache) filling up you should set the Minimum Free Space as described here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. It looks like the cache drive is too small to store the things you have set it to store (appdata, system, domains).
  19. Looks like your docker.img file is corrupt:
      Jun 14 10:57:00 Tower kernel: BTRFS error (device loop2): parent transid verify failed on 64290816 wanted 502498 found 502494
      ### [PREVIOUS LINE REPEATED 5 TIMES] ###
      Jun 14 10:57:02 Tower kernel: BTRFS error (device loop2): parent transid verify failed on 60866560 wanted 502498 found 502497
      ### [PREVIOUS LINE REPEATED 1 TIMES] ###
      Jun 14 10:57:02 Tower kernel: BTRFS warning (device loop2): iterating uuid_tree failed -5
      Jun 14 10:57:02 Tower kernel: BTRFS error (device loop2): parent transid verify failed on 64339968 wanted 502498 found 502494
      ### [PREVIOUS LINE REPEATED 1 TIMES] ###
      You should follow the steps here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page to recreate the docker.img file and re-install your docker containers with existing settings via Apps->Previous Apps. It might also be a good idea to run a memory check, as RAM issues are one of the commonest reasons for btrfs file systems getting corrupted.
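     If you want to check a saved syslog for that kind of corruption yourself, a small helper along these lines will pull out the btrfs error/warning lines for loop devices; it is purely illustrative and simply assumes the log wording matches the excerpt above (docker.img is normally mounted via a loop device such as loop2).

        import re

        PATTERN = re.compile(r"BTRFS (error|warning) \(device loop\d+\):")

        def btrfs_loop_errors(syslog_path: str = "/var/log/syslog") -> list[str]:
            # Return any btrfs error/warning lines that mention a loop device.
            with open(syslog_path, errors="replace") as log:
                return [line.rstrip() for line in log if PATTERN.search(line)]

        for entry in btrfs_loop_errors():
            print(entry)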
  20. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread so we can get a better idea of the current state.
  21. If you only have single parity then re-arranging them is easy, as drive slot number is not part of the parity calculation: you just use the New Config tool; rearrange the drives; tick the parity is valid checkbox; start the array. However parity2 DOES use the slot number as part of its calculation, so rearranging drives breaks parity2.
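     To see why single parity survives a re-arrangement while parity2 does not, here is a toy sketch: P parity is a plain XOR, which does not care about order, whereas a Q-style parity weights each byte by a per-slot coefficient (RAID-6 style GF(2^8) maths, shown only to illustrate the principle rather than reproduce Unraid's exact implementation).

        def gf_mul(a: int, b: int) -> int:
            # Multiply two bytes in GF(2^8) using the 0x11d polynomial.
            result = 0
            while b:
                if b & 1:
                    result ^= a
                a <<= 1
                if a & 0x100:
                    a ^= 0x11d
                b >>= 1
            return result

        def parity1(data: list[int]) -> int:
            # P parity: plain XOR, so slot order does not matter.
            p = 0
            for byte in data:
                p ^= byte
            return p

        def parity2(data: list[int]) -> int:
            # Q-style parity: each byte is weighted by 2^slot, so order matters.
            q, coeff = 0, 1
            for byte in data:
                q ^= gf_mul(coeff, byte)
                coeff = gf_mul(coeff, 2)
            return q

        original = [0x12, 0x34, 0x56]   # one byte from each data drive, in slot order
        shuffled = [0x56, 0x12, 0x34]   # the same drives after rearranging slots

        print(parity1(original) == parity1(shuffled))   # True  - P is still valid
        print(parity2(original) == parity2(shuffled))   # False - Q is now wrong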
  22. A short press of the power switch on the server is meant to initiate the standard Unraid shutdown sequence (as though you used the Shutdown button in the GUI).
  23. Scrub is available via the dialogue you get when clicking on a btrfs format pool or disk on the Main tab.
  24. It is one of the last settings in the first section, just before the disk warning and critical thresholds.