itimpi

Everything posted by itimpi

  1. Follow the standard procedure for replacing disks. With 2 parity drives you could do 2 drives at a time. You will need to do the parity disks first, as no data drive is allowed to be larger than the smallest parity drive. I would recommend keeping the disk being replaced intact until you are happy the replacement has gone OK, as that gives you some recovery options if it has gone wrong.
  2. Any disk that fails a SMART test should always be replaced.
  3. You should attach your system diagnostics zip file (obtained via Tools >> Diagnostics) to your next post to give us a chance of providing informed feedback.
  4. You definitely do not want the SSDs to be part of the array, for several reasons:
       • Trim is not supported for SSDs in the array.
       • Write performance would be badly compromised, as the SSDs would be slowed down by the need to update parity on every write.
     You may find that the best way to use SSDs that are not part of the cache is to mount them as Unassigned Devices for use by VMs.
  5. The global option is the default for all array and cache drives. You can override the default for any of these drives by clicking on it in the Main tab and setting a specific value for that drive.
  6. The Split Level setting just puts constraints on how related files can be split across drives (the default applies no constraints). The Allocation Method is the main factor in how evenly files are spread. It might be a good idea to attach your system diagnostics zip file (obtained via Tools >> Diagnostics) to your next post so we can get a better idea of how you have things set up.
  7. How Unraid splits data across drives is controlled by the combination of the Allocation Method, Split Level and Minimum Free Space settings for the share. You might want to read up on how these work and decide if you have the correct settings for the way you want things to work.
  8. Don’t worry about the others. Whatever is responsible for them will already be set up to re-establish them on a reboot.
  9. The easiest way to solve this is to:
       • Disable the Docker and VM services under Settings.
       • Set the appdata and system shares to Use Cache = Prefer, if they are not already set to that.
       • Start the mover from the Main tab. This will move files that should be on the cache (to optimise performance) from the array to the cache.
       • When mover finishes, re-enable the Docker and VM services.
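     The mover step in that sequence can also be started from the console. A minimal sketch, assuming the stock mover script location on Unraid 6.x (/usr/local/sbin/mover — verify the path on your version):

     ```shell
     #!/bin/sh
     # Sketch only: start Unraid's mover from the console instead of the Main tab.
     # Assumption: the stock mover script lives at /usr/local/sbin/mover.
     MOVER=/usr/local/sbin/mover

     if [ -x "$MOVER" ]; then
         "$MOVER"    # moves files between cache and array per each share's Use Cache setting
     else
         echo "mover not found at $MOVER - start it from the Main tab instead"
     fi
     ```

     Remember that mover skips files that are in use, which is why the Docker and VM services need to be stopped first.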
  10. In principle Unraid does not care where a drive is connected. However, Marvell controllers do not play well with recent Linux kernels (which Unraid runs on) and should be avoided, as they are prone to randomly dropping drives for no obvious reason.
  11. Any change to the contents of an array drive would cause a spin-up, and this includes changes to directory entries. A delete operation is a form of write so WOULD definitely cause a spin-up.
  12. Yes, assuming that is where you want the data to finally reside.
  13. I agree the descriptive text could be improved. It just needs a line added saying that the tool can be run at any time to correct permission issues.
  14. Unraid does not care what mix of file system types you use, as each disk is a self-contained file system. The sequence of events is:
       • Empty the drive by copying its data elsewhere.
       • Stop the array.
       • Click on the drive on the Main tab and change the file system to the one you want.
       • Start the array - the drive will show as unmountable.
       • Use the option to format unmountable drives on the Main tab. Make sure it is the drive you expect, as the format erases the current contents and creates an empty file system of the desired type.
     Parity is maintained throughout, as parity is file-system agnostic and just sees a format as another write operation.
  15. You can run the New Permissions tool at any time to correct permissions issues.
  16. I was trying to suggest a change that would be simple to implement and require virtually no GUI changes, but still address the commonest mistakes that users make with the Use Cache setting. That might make it easy to get into a release in the near future, possibly even the 6.9 release. On that basis I do think the “No” option should get the new behaviour, and the current behaviour of “No”, where mover does nothing for user share files on the cache (which is rather a special use case anyway), should get a new name. The redesign that you talk about sounds like a much bigger change and, although possibly desirable, not likely to make any release in the near future, I would think. I could be proved wrong, though.
  17. If you do not specify any values for the include/exclude drives at the global level, then all drives are available for use with User Shares. It is only when you restrict the drives at the global level that they are not automatically offered at the User Share level.
  18. Strange - I cannot see any share called user_data in the diagnostics that you posted! Is this by any chance a share that is not public and is restricted to your user? BTW: there appears to be an inordinately large number of config files for shares. I would be surprised if these all correspond to shares that are actually on your system. I suspect that at some point you accidentally created a lot of top-level folders under /mnt/user (which automatically get treated as User Shares) that are no longer there. Perhaps you should delete from config/shares on the flash drive any files that do not correspond to shares you actually have.
  19. This actually points out that we could do with another intermediate category called something like “Major” meaning it is very important but is not actually stopping the server working or directly causing data loss. I would then put this into the “Major” category rather than “Urgent”. I certainly agree it needs to be more than “Minor”.
  20. We seem to be getting an increasing number of cases where users have set Use Cache to No, but files for a share end up there anyway, either because Linux handles a move as a rename or because of the way a docker container is configured. A new setting option for Use Cache might help avoid this. The behaviour would be similar to the current “No” setting for new files, but when mover runs it WOULD move any files it finds belonging to the share from cache to array. I could not think of a short name for this mode. The best I could come up with is “Clear”, but maybe someone will have a better idea? The other way would be to make the “No” setting work in this way, as that is what most users seem to think “No” means, and rename the current “No” behaviour to something like “Keep” to indicate that any files already on the cache are kept there. Any thoughts?
  21. If you do not explicitly specify any include/exclude disks at the Global Share level then all drives ARE automatically included.
  22. I very much doubt that you are going to get anyone to try to work through the information you posted, as it is unformatted and not amenable to being analysed. You should include such information as an attached file (which hopefully retains its original formatting) if you want anyone to give you any useful feedback.
  23. That ran a check only, as you did not remove the -n flag (which is what the last line tells you), but the output looks good as no serious corruption is being reported. Rerun without the -n flag and add the -L flag to actually repair the file system. When you have done that, stop the array and restart in normal mode; you should then find the disk mounts normally.
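     The check-and-repair sequence above can be sketched as console commands. Assumptions: the file system is XFS and the affected drive is disk1, which Unraid exposes as /dev/md1 when the array is started in maintenance mode (substitute the correct mdX device for your disk):

     ```shell
     #!/bin/sh
     # Sketch only: check, then repair, an XFS file system on an Unraid array disk.
     # Assumption: the affected drive is disk1 -> /dev/md1 (array in maintenance mode).
     DEV="/dev/md1"

     if [ -e "$DEV" ]; then
         xfs_repair -n "$DEV"   # check-only pass: reports problems, changes nothing
         xfs_repair -L "$DEV"   # repair pass: -L zeroes the metadata log if it cannot be replayed
     else
         echo "$DEV not present - run this on the Unraid console with the array in maintenance mode"
     fi
     ```

     Running the repair against the mdX device (rather than the raw sdX device) keeps parity in sync while the repair writes to the disk.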
  24. I suspect the ‘emulated’ disk was showing as unmountable before doing the rebuild? The rebuild process does not fix file system corruption - you need to follow the procedure shown here in the online documentation to fix file system level corruption.
  25. Rather than trying to solve the utilisation issue, it might be easier and quicker to:
       • Stop the Docker service.
       • Delete the current docker.img file.
       • Set the size you want for docker.img.
       • Restart the Docker service to create a new, empty docker.img.
       • Use the Previous Apps option on the Apps tab to reinstall the containers you now want, with all their settings intact.
     That does not mean that knowing the answer to the original question is not of value for future reference.