Everything posted by itimpi

  1. You can install the plugin with the check already running. Not much point in stopping the current check as you would just need to start again from the beginning. The plugin is not responsible for initiating the operation (which is Unraid’s responsibility), it just adds better management of what happens after it is started. Yes, but it will run much slower than when the check is not running (and also slow down the Check while they are both running) due to contention on the drives. You should therefore avoid doing large amounts of writes to the data drives while a check/sync is running as doing them separately would end up being faster, but small amounts would be fine. When you add a drive, parity is not regenerated. Instead the drive is Cleared (if not pre-cleared) to avoid it affecting parity. You can find a discussion of Clear v Preclear here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
  2. Have you read this section of the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page that covers using WireGuard? The Unraid OS->Manual section in particular covers most features of the current Unraid release.
  3. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread. It is always a good idea when asking questions to supply your diagnostics so we can see details of your system, how you have things configured, and the current syslog. The syslog in the diagnostics is the RAM copy and only shows what happened since the reboot. It could be worth enabling the syslog server to get a log that survives a reboot so we can see what happened prior to the reboot. The mirror to flash option is the easiest to set up, but if you are worried about excessive wear on the flash drive you can put your server’s address into the Remote Server field. Do you by any chance have the S3Sleep plugin installed? That has been known to spuriously shut down servers.
  4. If you read that link carefully you will see that despite its name it is NOT for replacing an existing parity drive with a larger one. For that you simply follow the procedure for Upgrading parity disks.
  5. Parity sync time is not determined by how much data you think is there but by the size of the parity drive, as it works at the raw sector level and works through every sector on the parity drive. In my experience something like 1-2 hours per TB of parity drive size is not atypical (so, for example, a 10TB parity drive would normally take roughly 10-20 hours). You might want to read this section of the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page to get a better idea of how parity works. The Unraid OS->Manual section in particular covers most features of the current Unraid release. While a parity sync/check is active the performance of the array is significantly reduced, and since parity checks/syncs take so long with large modern drives you might want to consider using the Parity Check Tuning plugin to run the check in increments outside prime time.
  6. You could use the User Scripts plugin to periodically set the permissions to the Unraid default. Under the covers the New Permissions tool invokes the ‘newperms’ command, which can also be used in a script, as sketched below.
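     A rough sketch of what such a User Scripts script could look like (the share name is a placeholder, and this assumes ‘newperms’ accepts the target directory as its argument):
     #!/bin/bash
     # Reset everything under the share to the Unraid default owner/group and permissions
     newperms /mnt/user/MyShare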
  7. Then you would need to get the author of the container to provide a way to set permissions correctly. Typically this is done by exposing the umask setting in the container.
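     As a hypothetical illustration only (whether this works, and the exact variable name, depends on the container; many images expose a UMASK environment variable alongside PUID/PGID), the equivalent docker run fragment would be:
     docker run -e PUID=99 -e PGID=100 -e UMASK=000 ...
     In an Unraid container template this corresponds to adding a Variable with Key UMASK and Value 000 (or 022 if you do not want group/other write access).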
  8. Under the Peer Allowed IPs field you need to put entries for each subnet you want to be able to access via WireGuard.
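     For example (hypothetical subnets), if your LAN is 192.168.1.0/24 and the WireGuard tunnel network is 10.253.0.0/24, the Peer Allowed IPs field would contain:
     192.168.1.0/24, 10.253.0.0/24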
  9. It is not clear whether you intend to have a pool called Main, or whether you intend these files to be on the Unraid array. You should start by setting up User Shares for “Data”, “Download” and “Data”, specify where you want them stored as the Primary Storage setting, and then make sure that for these shares the SMB Export setting is set to something other than the default of “No” so they are accessible as shares across the network. It is also not clear to me whether you need “Main” as a share. If you do and you are using the main Unraid array, then you could install the Unassigned Devices plugin and use its ‘root’ share feature, calling it Main to expose the whole of the array as the “Main” share. If you are using a pool rather than the main array, then you could call that pool “Main” and export that instead.
  10. Those are correct for getting the user and group, but that is not sufficient. Is there anything that mentions umask, which is what is used to get the correct permissions?
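      As a quick illustration of how umask maps to the permissions of newly created items (standard behaviour, independent of any particular container):
      umask 000  ->  directories 0777 (drwxrwxrwx), files 0666 (-rw-rw-rw-)
      umask 022  ->  directories 0755 (drwxr-xr-x), files 0644 (-rw-r--r--)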
  11. Have you tried running a scrub on disk2 (which is the drive the errors mention)? You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread. It is always a good idea when asking questions to supply your diagnostics so we can see details of your system, how you have things configured, and the current syslog.
  12. That shows there is an issue with the docker container setting the permissions correctly, not with the owner being ‘nobody’ and the group being ‘users’. On a directory I would expect them to be drwxrwxrwx and on a file -rw-rw-rw-. These are the permissions you would get if you ran Tools->New Permissions on the share to set them to standard Unraid defaults.
  13. What share is it that you think is a problem? None of the shares you have set up to move files from cache to array have any files on the cache so there is nothing for mover to do.
  14. I suggest you post the result of the ls -l command to show the full permissions of the file in question. It could well be something other than the username that is the issue. I have all my Unraid files owned by ‘nobody’ and can access them from all my systems with no problems, which is why I suspect some other issue.
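      Something along these lines (the path and the output shown are only illustrative; the important parts are the permission bits and the owner/group columns):
      ls -l /mnt/user/MyShare/somefile
      -rw-rw-rw- 1 nobody users 12345 Jan  1 12:00 /mnt/user/MyShare/somefile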
  15. Not quite sure why you think you have a problem? The "nobody" user is the standard default in Unraid.
  16. Yes, and it has been present for several releases now.
  17. Not really, as the Unraid approach to parity is very different to a traditional RAID system. As an example, the Unraid array has each disk as a separate file system (that can be read on another system by itself if needed), and you can use a mix of the supported file systems in the Unraid array. It WOULD, however, allow any 2 disks in the Unraid array to fail and still let you recover their contents.
  18. No parity drive can be smaller than the largest data drive, so you would need one of the 4TB drives to be a parity drive. There is a description of how Unraid parity works here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. The Unraid OS->Manual section in particular covers most features of the current Unraid release.
  19. This would have also formatted the emulated drive thus wiping its data. You would have needed to run the procedure for fixing disk1 to make it mountable before attempting any format.
  20. You might need to check whether your Minimum Free Space setting is causing this. We could probably tell if you posted your system’s diagnostics.
  21. @jmt380 So far I have not been able to replicate your issue when running a test on 6.12.9. Just in case it is something specific to the way you have your array set up, perhaps you could let me have a screenshot of your Main tab so I can try and set up my test environment to mirror that as closely as possible. Alternatively a copy of your diagnostics would allow me to extract the needed information from there.
  22. @jmt380 Thanks for reporting this. I will have to see if I can recreate the problem on 6.12.9 and/or 6.12.10. I must admit I have not gotten around to explicitly testing against those releases, thinking nothing in the Release Notes looked like it would break the plugin - but it looks like that might have been optimistic 😒. The fact it occurs every 7 minutes tells me it is the monitor task the plugin runs while a check is in progress. The CustomMerge file mentioned is a built-in Unraid one that I have made use of, and it is possible something in it has changed that I need to take account of (or write a replacement for it so I no longer need it).
  23. I simply mounted the remote shares in Unraid using Unassigned Devices and then ran rsync from the Unraid end using the User Scripts plugin (a sketch of the sort of script I mean is below). You are going to need to sort out what needs doing for one machine to see the other's shares. I cannot help with the details of doing that, I am afraid, as I have a different combination of end devices from you.
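      A minimal sketch of such a script (the share names and mount point are placeholders, and this assumes Unassigned Devices has mounted the remote share under /mnt/remotes):
      #!/bin/bash
      # Mirror the local share to the remote share mounted by Unassigned Devices
      rsync -avh --delete /mnt/user/MyShare/ /mnt/remotes/REMOTESERVER_backups/MyShare/
      Note that --delete removes files on the destination that no longer exist on the source, so leave it out if you want a purely additive copy.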
  24. I have used rsync for backing up before. You should give more details about why it did not work. Have you successfully managed to get a network connection made between the two machines so that they can see each other's shares?
  25. Was disk1 showing as unmountable before the rebuild (as a rebuild does not clear an unmountable status)? Have you followed the procedure documented here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page for unmountable drives? The new disks showing as unmountable is expected until you format them from within Unraid to create an empty file system ready to receive files. However you do NOT want to do this while disk1 is also in the list or you will format disk1 as well, thus losing its contents. Not sure how you could be looking at its contents since it is unmountable?