
itimpi (Moderators)
Posts: 20,699 · Days Won: 56

Everything posted by itimpi

  1. It IS already supplied. It is located at /usr/src/linux-5.10.28-Unraid/.config for the current unRaid release.
  2. Did you figure out why that folder was created in the first place? If not, it is likely to come back when you reboot.
  3. The docker.img file contains the binaries for any docker container, plus any paths not mapped to be external to the container, so having it on the array will definitely keep reads/writes to the array occurring. You may find this section of the online documentation (accessible via the Manual link at the bottom of the GUI) useful.
  4. No, other than the stress put on the data drives. Building parity is a read-only operation as far as the data drives are concerned. You probably first want to check that the problem drive is mountable via UD, as if a file system repair is going to be required then now is the time to do it, before the New Config + parity build.
  5. Do NOT add it as a new data drive, as that would cause unRaid to ‘Clear’ it (by writing zeroes to every sector) before adding it to the array, thus erasing its contents. From your description this is NOT what you want to happen. It would also leave you with the problem of how to remove the drive from the array once you had copied files off it. Instead, simply mount the drive using Unassigned Devices and that will make it available for copying its files onto the disks you will be keeping in the array. After copying the files you can simply unmount it, without affecting the array contents, and remove it. I would recommend checking that the drive can be mounted using UD before attempting the New Config + parity rebuild you mention, just to make sure it will mount OK, in case some additional action (e.g. a file system repair) is required to get UD to mount it successfully. It is also worth mentioning that there may be nothing much wrong with the disabled disk if it was simply a cable problem that caused it to be disabled by triggering a write error, as that is by far the commonest reason disks get disabled.
  6. Upgrading the size of a drive requires rebuilding the drive contents onto the larger one. In theory you could do both at the same time with dual parity, but then you would not be protected against another drive failing while doing this. From your statement I assume you want to remain protected against another drive failing during this process? If so, it simply means doing each of them in turn. That means it will take twice as long, but I assume time is not the key factor? You should also ensure you have valid parity on both parity disks before attempting this. I have just checked the online documentation and I notice that it does not explicitly have a section to cover rebuilding a disk that has not failed because you simply want to replace it with a larger disk. I will add this later today to make it clear that this is just a variant of the rebuild you do if the disk to be replaced had instead failed.
  7. No way to know for certain then, but typically the lost+found folder is where folders/files end up if their directory entry cannot be found. With only one file there it does not sound as if you had serious corruption. You can use the Linux ‘file’ command on the one in lost+found to help identify its type and thus what program to check it with. Do you have backups you can use to compare against what is on unRaid?
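A minimal sketch of using ‘file’ as described above. On unRaid the recovered files would live under something like /mnt/disk1/lost+found (the disk number is a hypothetical example, adjust to match your array); the sketch below demonstrates on a temporary file so it can be run anywhere:

```shell
# 'file' inspects a file's contents (not its name) to guess the type,
# which is exactly what you need for an anonymous lost+found entry.
# On unRaid you would instead run:  file /mnt/disk1/lost+found/*
recovered=$(mktemp)
printf '%s\n' 'some recovered text' > "$recovered"
file "$recovered"      # e.g. reports "ASCII text" for a plain text file
rm -f "$recovered"
```

A video file would typically report something like “ISO Media” and a photo “JPEG image data”, which tells you which application to open it with.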
  8. Not for certain if you do not have a list of what was there before. However, if no lost+found folder was created on the drive then it is highly likely that nothing was lost.
  9. The best performance comes from using an SSD in a pool (so it is not part of the array), and then unpacking to a share with a Use Cache = Yes setting. That way the files stay on the pool during the download and any unpack process (keeping array disks spun down), and then get moved to the array when mover runs (typically overnight).
  10. What do you think should be shown in the GUI? Vdisks created for VMs are already sparse under unRaid.
  11. Have you tried clicking on a drive on the Main tab and setting the value you want from there?
  12. NO! If you issue a format, that will erase the contents of the disk and update parity to reflect this. You should instead run the xfs_repair command without -n and with -L to repair the file system.
  13. Each plugin will have a .plg file on the flash drive under config/plugins. It is this file that triggers the plugin to be installed during the boot process, so renaming this file to have a different extension and then rebooting will stop such plugins from installing.
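A sketch of the renaming step above. On a real unRaid flash drive the directory is /boot/config/plugins; the sketch uses a PLUGDIR variable (an assumption for illustration) so it can be pointed at any directory, and the .disabled extension is just an arbitrary choice that stops unRaid matching *.plg at boot:

```shell
# Rename every .plg file so it is skipped during the boot process.
# Default to the real unRaid location, but allow an override for testing.
PLUGDIR="${PLUGDIR:-/boot/config/plugins}"
for f in "$PLUGDIR"/*.plg; do
    [ -e "$f" ] || continue        # no .plg files present, nothing to do
    mv "$f" "$f.disabled"          # e.g. foo.plg -> foo.plg.disabled
done
```

Renaming a file back to its original .plg extension and rebooting re-enables that plugin, which makes this a convenient way to bisect a misbehaving plugin.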
  14. A pool is (potentially) part of a share when you select that pool within the share setting. Whether it actually gets used depends on what option you select from the Use cache pool setting.
  15. Chances are that if it is a new motherboard you will need to be on the unRaid 6.9.x releases to have driver support for your motherboard’s NIC.
  16. I always treat a SMART report that says ‘passed’ as not definitive (whereas a failure probably is). I would not be happy with such a large number of reallocated sectors, as they definitely indicate that there have been a lot of read failures in the past. Technically, if that number stays stable you may be OK, but it is definitely a disk to consider retiring.
  17. You should be very wary about changing permissions inside appdata as each container may have its own special requirements around permissions.
  18. This is documented here in the online documentation.
  19. Using the Yes setting means that you only want the files temporarily on the cache, but want them to end up on the array. Is this what you really want with a large cache drive? If you want the final residing place to be the cache then you use Prefer. The ‘Yes’ setting is often misinterpreted, but it is a hangover from the fact that originally only the Yes and No settings existed. I think if we were starting with a clean sheet we would probably invert the meanings of the Yes and Prefer settings, but at this point it is probably not practical as it would cause too much disruption.
  20. There is also the fact that the "prefer" setting works even if you do not yet have a cache drive and then later when one is added content gets moved to the cache without further user action. However it does cause a problem if a cache temporarily disappears for some reason as then you can end up with duplicates of files in the system share such as the docker image. Not sure I can see how this could be avoided though.
  21. I'll amend the bit I added to the documentation to mention that you can use 'start' if you have already created the container, and 'run' if you want to either redownload the container or change a setting. I think for practical purposes that is the difference? I perhaps also need to explain how the image name is set up and used?
  22. Checked the diagnostics posted earlier and as far as I can see there is no share with Use Cache=Yes setting which is the one required for files to move from cache to array. Have I missed something? Which share do you think is not behaving as expected? I would suggest turning on the Help in the GUI for this setting to see how the various options work and how they affect mover. Turning on mover logging might also help with seeing what is going wrong.
  23. Since I do not know of any specific plans to add this feature to unRaid I have added a section here on what can be done in the meantime to easily set up such a schedule (in the Docker section of the online documentation that can be accessed via the Manual link at the bottom of the unRaid GUI). I would be interested to know if this is considered sufficient in the short term or more detail is needed?
  24. How files are distributed depends on the settings you have for your User Shares. You should post your system’s diagnostics zip file (obtained via Tools -> Diagnostics) so we can see how you have things set up, if you want any sort of informed feedback.