Everything posted by itimpi

  1. Q1) Using Only is fine as long as you do not already have files for that share on the array. If you DO have files on the array then: set the share to Prefer; disable the Docker and VM services under Settings; run mover to get the files moved to the cache; re-enable the Docker and VM services; (optional) change the Use Cache setting to Only.
     Q2) There is no way to explicitly set the division between VM and file caching. What you can set is the Minimum Free Space setting under Settings >> Global Share Settings; when free space on the cache falls below this value, files get written directly to the array. The recommended value is larger than the size of the largest file you are likely to write (a sketch for checking current free space is shown below). Having said that, since you do not have a parity drive (which is what limits write speed to array drives), I wonder whether you even need to bother caching file writes in the first place.
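     When picking a Minimum Free Space value it can help to compare it against the space currently free on the cache from a console session. A minimal sketch, assuming the standard /mnt/cache mount point; the 50GiB threshold is purely illustrative:

         # Show free space on the cache pool (mounted at /mnt/cache on Unraid)
         df -h /mnt/cache

         # Illustrative check: warn if available space drops below 50GiB
         free_kb=$(df --output=avail /mnt/cache | tail -1)
         if [ "$free_kb" -lt $((50 * 1024 * 1024)) ]; then
             echo "Cache free space is below the 50GiB threshold"
         fi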
  2. Not surprising, as they indicate a problem with the hardware.
  3. You can always upgrade manually by downloading the zip file for the release from the Unraid site and extracting all the bz* type files, overwriting those on the root of the flash drive. It is a good idea to first make a backup of your flash drive so you have the ability to revert if you want to for any reason (a console sketch is shown below).
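     If you prefer to do this from a console session, the steps look something like the sketch below. The flash drive is mounted at /boot on a running server; the zip file name and the backup destination are illustrative:

         # Back up the current flash contents first (destination path illustrative)
         mkdir -p /mnt/user/backups/flash
         cp -r /boot/* /mnt/user/backups/flash/

         # Extract just the bz* files from the release zip over the root of the flash
         # (unRAIDServer-6.x.zip stands in for the file you downloaded)
         unzip -o /path/to/unRAIDServer-6.x.zip 'bz*' -d /boot/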
  4. It is always possible to do a manual upgrade (or downgrade) by downloading the zip file for the release from the Unraid site and extracting all the bz* type files into the root of the flash drive, overwriting the versions already there. It is a good idea to make a backup of the flash drive before doing this, just in case you want to revert to your current release.
  5. You can always fall back to using the Manual (Legacy) method for creating the flash drive.
  6. Click on the drive on the Main tab and explicitly set the format to be XFS.
  7. It is always possible it dropped offline for some reason, and that power cycling the server will get it back.
  8. Only if you have no shares set to Use Cache = Prefer (or Only). As you have some set to Prefer, there will always be some space used by these shares. The ones set to Prefer are ones you would normally WANT to be on the cache to maximise the performance of docker containers and VMs.
  9. Even if ZFS ever becomes supported, SHFS is going to stay as that is what provides the User Share level in Unraid.
  10. You do not need to use (and should not use) the New Config option. You can simply unassign the cache drives with the array stopped and then restart the array.
  11. The 'previous' folder on the flash drive will hold the files associated with the last release. Copying these files back to the root of the flash drive and rebooting will restore that release (a console sketch is shown below). Before you do this you might want to get the system diagnostics zip file for the 6.8.3 release by running the 'diagnostics' command, which will put the resulting file into the 'logs' folder on the flash drive, and then post that file here. I do not believe there is any known issue with Intel NICs in the 6.8.3 release, so it is quite possible that something else is going on rather than your NIC simply not being detected, and the contents of the diagnostics zip file might allow us to determine what that might be.
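      From a console session the roll-back looks something like this; the flash drive is mounted at /boot, and the 'previous' folder sits at that level:

          # Capture diagnostics for the current (problem) release first;
          # the resulting zip lands in the 'logs' folder on the flash drive
          diagnostics

          # Copy the prior release's files back over the root of the flash, then reboot
          cp /boot/previous/bz* /boot/
          reboot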
  12. What is the path for files that you want moved that are not being moved? For files to be moved to the array:
      1. The Use Cache setting for the share must be set to Yes
      2. The files must not be open
      3. The files must not already exist on the array.
      You have a share which starts with M that satisfies point 1, but I am not sure if this is your problem one? It is worth noting that any share that has Use Cache = Prefer means files should be moved from array to cache. This is normally used for shares such as 'appdata' where the increased performance of holding them on the cache is desirable. Your 'system' share has files on both disk1 and the cache. This share is normally used to support Docker and VMs. If you want files associated with those services to be moved to the cache then you need the Docker and VM services to be disabled when you run mover, to satisfy point 2 (a sketch for checking open files follows).
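      To check point 2, you can ask Linux which processes are holding files open under the cache copy of a share. A minimal sketch; 'system' is used here only as an example share name:

          # Recursively list any processes with files open under the share
          lsof +D /mnt/cache/system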
  13. There is a separate request raised to tell the Samba service to re-read its configuration without going through a full restart (which, from some googling, seems to be a feature supported by Samba; see the sketch below). I think that would be a better solution, but if for any reason it is decided that is not possible then a warning would be nice.
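      For reference, Samba ships an smbcontrol utility that can ask the running daemons to reload smb.conf without dropping client connections; whether Unraid could wire this in cleanly is an assumption on my part:

          # Ask all running Samba daemons to re-read smb.conf without a restart
          smbcontrol all reload-config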
  14. Yes, my 3) should be 2). 6.2) This is done by clicking on the disk on the Main tab; the resulting dialog gives you the option to explicitly set the format you want. The defaults only come into play if you do not explicitly set a value on an individual disk. On cache drives you are forced to use BTRFS if there is more than one drive in the pool. Not sure what the default is for a single drive, but I would recommend setting it explicitly to play safe (especially as you want to change a BTRFS formatted drive to XFS).
  15. I suspect it is a side-effect of moving onto later revisions of the Linux kernel and packages rather than Unraid-specific code. I very much doubt that Limetech are ignoring this issue, but we will see when we get the next beta whether they have found anything.
  16. Looks correct. In step 6) I would recommend choosing the option to keep current assignments, and then making the changes you want before starting the array, as that means you are less likely to make an error. If you intend to use disk 7 as a new data disk I would leave it assigned, as that means parity will be built taking it into account.
  17. Nearly right:
      3) Change the Use Cache setting to Yes (needed to move files from cache to array)
      6.1) Change the number of cache slots to 1 (to stop you being forced to use BTRFS)
      6.2) Change the desired format for the remaining cache drive to XFS. You do not format it at this stage.
      7.1) Format the cache drive (which will be showing as unmountable)
      unnecessary - you did this in step 1)
      9) Change the Use Cache setting to Prefer (needed for files to move from array to cache)
      11) (optional) Change the Use Cache setting from Prefer to Only
      12) Re-enable the Docker & VM services
  18. The Linux 'file' command can indicate the probable type of each of those files even when they have no extension (see the sketch below).
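      A minimal usage example; the folder path is illustrative:

          # Report the probable type of every file in a folder, extension or not
          file /mnt/user/downloads/*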
  19. When you start the array after assigning the disk, Unraid will repartition the disk to meet Unraid standards (normally destroying access to any data already on the disk) and give you the option of formatting the drive to create an empty file system of the type you have selected for the drive, the default being XFS for array drives and BTRFS for cache drives.
  20. A not uncommon approach, with the SSD being handled via the Unassigned Devices plugin. You mention the 2x4TB HDD as being mirrored. Technically in Unraid terms that is one data drive plus one parity drive. In that special case parity IS actually a mirror of the data drive (see the sketch below for why). However, if you later added additional data drives then parity would no longer be a mirror, although you would still be protected against a single data drive failing.
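      The single-drive case degenerates to a mirror because Unraid's (single) parity is the bitwise XOR of the corresponding bits of all data drives, and the XOR of one value on its own is that value. A tiny shell illustration with made-up byte values:

          # One data drive: the parity byte equals the data byte (prints 165, i.e. 0xA5)
          echo $(( 0xA5 ^ 0 ))

          # Two data drives: parity is 0xA5 XOR 0x3C (prints 153, i.e. 0x99),
          # which matches neither drive, so it is no longer a mirror
          echo $(( 0xA5 ^ 0x3C ))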
  21. It is much more convenient for those trying to help if you attach the complete zip file created by diagnostics rather than posting each individual file from the zip.
  22. I think you would have to do an 'ls -l' command to get the permissions before running that command, so we can see how they are wrong. That might give a clue as to how they got into that state in the first place (a sketch is shown below).
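      A minimal example of capturing the permissions somewhere they can be inspected and posted; the share name and output path are illustrative:

          # Record current ownership and permissions before changing anything
          ls -l /mnt/user/ShareName/ > /boot/permissions-before.txt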
  23. Do NOT format the disk unless you are happy to lose its contents. Instead you want to run a file system check/repair as outlined here in the online documentation. The vast majority of the time this fixes the issue with no data loss (a console sketch for an XFS disk follows).
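      The check can also be run from a console session with the array started in Maintenance mode. A minimal sketch, assuming the unmountable drive is an XFS-formatted disk1 (Unraid exposes array disks as /dev/mdX so that parity is kept in sync during the repair):

          # Dry run first: -n reports problems without modifying anything
          xfs_repair -n /dev/md1

          # If the report looks sane, repeat without -n to actually repair
          xfs_repair /dev/md1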
  24. How are you moving the files? What folder/share are they in? Are you saying that running Docker Safe New Permissions is not fixing the problem? You should post your system diagnostics zip file (obtained via Tools >> Diagnostics) again so we can see what is going on.
  25. There is a better way to do what you want! First decide exactly which data disks you want to move to parity. N.B. I am assuming that you have valid parity?
      1. Stop the array and unassign one parity disk and one data disk.
      2. Start the array to commit these changes. At this point Unraid will say it is emulating the missing data drive. You should still be able to see all its contents just as if it were still assigned.
      3. Stop the array and assign the parity drive you have just unassigned in place of the missing data drive.
      4. Start the array to rebuild the data drive by writing the emulated drive's contents to the new data drive (which was the parity drive). Keep the data drive you have just unassigned intact, just in case anything goes wrong with the rebuild.
      5. After the rebuild completes successfully, stop the array, assign the old data drive to replace the parity drive, and then start the array to build parity.
      In theory you could combine steps 4 and 5 to save time, but then you would have no recovery option if anything went wrong with the rebuild of the data drive. Using two separate steps means you keep the original data drive intact until you can confirm that the rebuild step completed successfully, at the expense of additional elapsed time. After the first parity/data drive swap has completed successfully you can repeat the procedure for the other parity drive and a different data drive.
      BTW: Although there is nothing wrong with having dual parity drives, it is probably overkill for such a small array. You might be better off using the second parity drive as an additional data drive instead. Do you have good backups of all critical data on your array, as parity should not be used instead of having backups?