Everything posted by itimpi

  1. Glad you found the advice useful. If you want to remove the (very remote) possibility of a parity check notification being blocked after an unexpected reboot, then your plugin installation logic (which runs as part of the boot process) could remove the timestamp flag if it exists. In fact, if that flag is being stored in RAM rather than on the USB stick, that would already be the default behaviour.
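     As a rough sketch of what that boot-time logic might look like (the flag file name and location here are purely hypothetical, not the plugin's actual file):

        # Hypothetical flag path - substitute whatever file the plugin actually uses
        FLAG=/boot/config/plugins/myplugin/paritycheck.suppress
        if [ -f "$FLAG" ]; then
            rm -f "$FLAG"    # clear any stale flag left over from an unexpected reboot
        fi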
  2. I know where the configuration entry is stored, but changing this sounds a little dangerous in case something goes wrong trying to change it back. Another possibility would be to intercept all calls to the notify script. If you want to follow up on either of these it may be best done via PM?
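     Purely as an illustration of the interception idea: rename the stock notify script (which I believe lives at /usr/local/emhttp/webGui/scripts/notify - verify that on your own system) to notify.real, and drop in a thin wrapper like the sketch below in its place. Note that anything under /usr/local lives in RAM, so the change would need re-applying after every boot:

        #!/bin/bash
        # Wrapper sketch: log every notification call, then hand off to the original script unchanged
        echo "$(date) notify $*" >> /var/log/notify-intercept.log
        exec /usr/local/emhttp/webGui/scripts/notify.real "$@"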
  3. Fair enough - just thought it was worth checking in case you had a blind spot for a simple error. I do not use that feature so I have no other ideas for what might be causing your problems.
  4. Actually, by default it is not! The /lib/firmware location is an exception to the norm as it is mounted via the 'loop' device from the image held on the USB stick, so in theory it can be updated and survive a reboot. However, there is no free space in the image, so updating it may easily fail. Even if you did successfully make such an update, it would be lost when Unraid is updated and the bzfirmware file on the USB stick is replaced with a newer version. It was also stated that a umount should be run on /lib/firmware, which would then preclude the image on the USB stick from being updated.
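     You can see this mount arrangement for yourself from a terminal; something along these lines (exact output varies by release) should show /lib/firmware backed by a loop device and the bzfirmware file on the flash drive:

        # Check how /lib/firmware is mounted and which loop device backs it
        mount | grep firmware
        losetup -a | grep bzfirmware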
  5. The second last screenshot shows you have set up the cpu pinning selections but have not yet hit the Apply button to commit them. Are you sure you are not omitting that step?
  6. It was OK to do this. A format is just a form of write operation, and the parity build process can handle writes occurring to the array while it is running. Such writes will slow down the parity build (and the write operation), but in the case of a format this would only be by a matter of minutes; larger writes have more impact. There is also the fact that if a data drive fails while building the initial parity, the contents of the write could be lost. No harm is done. The reason is all about failure of a data drive while attempting to build the new parity. If the following conditions are met: the old parity drive is kept intact, and no data is written to the array while attempting to build parity on the new drive, then there is a process (albeit a little tricky) where the old parity drive can be used in conjunction with the 'good' data drives to rebuild the data on the failed data drive. It is basically a risk/elapsed-time trade-off, and the recommended process minimizes risk at the cost of increasing elapsed time. This is only done if you already have parity fully built. It is not required if you are in the process of building initial parity (or have no parity disk). This is because you were running the parity build process; in that case whatever is already on the disk is automatically included in the parity calculations, so it is not a requirement that the disk contain only zeroes.
  7. You need to distinguish between what the core Unraid system does and what the plugin does. Core Unraid seems to reset for each increment, but when the plugin is handling the pause/resume process it keeps track of how each increment went and then, on completion, it calculates the figures for the run as a whole, and this is what ends up in the history.
  8. Not quite sure what you are asking? If you are talking about the parity history then the plugin has always updated it to be correct for the run as a whole - not just the last increment. It was already doing this for duration, speed etc, but there was a bug (now fixed) where this was not being done for the error count. While writing this it has just occurred to me that the parity check history does not currently show whether a run was correcting or non-correcting - would it be useful to have this displayed in the history as well? It should be an enhancement I could make to the plugin easily enough.
  9. The plugin should now update the error count in the parity history for any future checks. In addition I have exposed some functionality at the CLI level that some may find helpful:

        Usage: parity.check <action>
        where <action> is one of:
          pause      Pause a running parity check
          resume     Resume a paused parity check
          check      Start a parity check (as Settings->Scheduler)
          correct    Start a correcting parity check
          nocorrect  Start a non-correcting parity check
          status     Show the status of a running parity check
          cancel     Cancel a running parity check

     I would be interested in any feedback on this feature.
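     As an example of how this might be used (purely illustrative - the schedule and the backup-window scenario are my own assumptions, not part of the plugin, and you may need to give the full path to the command so cron can find it): pause any running check while a nightly backup runs and resume it afterwards:

        # Hypothetical cron entries: pause a running parity check at 01:00,
        # resume it at 04:00 once the backup window is over
        0 1 * * * parity.check pause
        0 4 * * * parity.check resume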
  10. I have now tried several open source browser-based file managers on Unraid and so far I have not managed to find one that gives anything like acceptable performance. I am not sure why they perform so badly, but it does seem to be a consistent pattern.
  11. Have you actually GOT a partition 3 even created on the disk? The error message suggests that may not be the case.
  12. Have you enabled Destructive mode in the UD settings? You need this to be allowed to format.
  13. The first post of any thread where LimeTech announce a new release normally contains the release notes for that release.
  14. That is not at all unusual! You need to provide the -L flag to get past this. Normally despite the warning there is no resulting data loss.
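     For example, with the array started in Maintenance mode, the repair would typically be run against the md device for the affected slot (replace md1 with the appropriate disk number for your array; on recent Unraid releases the device may be named md1p1 rather than md1):

        # Run the XFS repair with the -L flag to zero the log and get past the error
        xfs_repair -L /dev/md1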
  15. It might be worth checking what disk(s) contain a folder DropboxTest. There should only be one. Having more than one would definitely confuse things as you would not know which one had the Dropbox image file mounted on it. You might want to explicitly specify a disk drive (e.g. /mnt/diskX) rather than ‘/mnt/user’ for the location (where I was using /mnt/cache) as that gives you more control. It will still show up under /mnt/user as all top level folders on any drive show up there.
  16. Ignore the Warning messages - they appear to be part of normal operation. The only one you need to watch out for, in my experience, is whether it says it is no longer linked, in which case you need to repeat the relink process. I am not sure if there is an easy way to tell if it has finished syncing other than the fact that the number of files (and the space occupied by them) stops increasing. In my experience it ‘just works’ so I do not need to do anything else about it. I tend not to be syncing very large files, so there is not normally much delay in propagation. It can be useful to have another device (I have an iPad) that is connected to your Dropbox account via the web interface if there is any particular file of concern: if it shows up in the web interface and not on the Unraid server it is probably still syncing. I do not know if there is anything that could be done at the container level to get the sync status. Not knowing enough about how the internals of Dropbox work, this is not something I have ever looked at. I was just very glad to have it syncing to my Unraid server. Regarding the SMB side, you will now have a User Share called Dropbox. Just set the SMB settings like you would for any other share on Unraid.
  17. That was where the docker image is mounted, so yes, that is where I pointed the container for storing Dropbox files. This then appears by default as the User Share 'Dropbox' so you can access the contents from other machines. I pointed the config mount point for the container to /mnt/cache/appdata/dropbox. Seems to work fine as far as I can tell; I have been running like this for some months now. When you first start the docker container you need to open the container's log file to get the URL to link it to your Dropbox account. Also, if you are using a free Dropbox account you have to make sure that you have not exceeded the limit of 3 devices that is now imposed on free accounts. Yes. If you do that you should find that any existing files in your Dropbox account get synced down to the Unraid server. I set it up as a script both so I could remember the steps and also to automate the steps on starting/stopping the array, but there is nothing stopping you from running each command manually the first time around to get a feel for it. Feel free to ask further questions and suggest any improvements I could make to the scripts.
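     For anyone wanting a feel for the shape of it, here is a very rough sketch of the kind of docker run command involved - the image name and the container-side paths are placeholders/assumptions, as the actual container you choose will document its own:

        # Hypothetical sketch only: substitute the real image name and its documented container paths
        docker run -d --name=dropbox \
          -v /mnt/cache/appdata/dropbox:/config \
          -v /mnt/cache/Dropbox:/data \
          some/dropbox-image:latest

        # After the first start, read the container log to find the URL used to link it to your Dropbox account
        docker logs dropbox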
  18. You might be interested in this post which shows how I have Dropbox working with this container
  19. Powerdown is Unraid specific and triggers the same processing as using the Powerdown button from the GUI. The shutdown command is a Linux command that can bypass many of the steps that powerdown runs.
  20. For general reference, the diagnostic information that @vw-kombi supplied showed that the plugin is not updating the history correctly when errors were reported/corrected during the parity check. The history will always show 0 errors until I issue a corrected version of the plugin.
  21. My suspicion is that the corrected count displayed is the true value and that there is a bug in the plugin with updating the history correctly when errors are corrected during the run. Can you look to see if you have the file 'config/plugins/parity.check.tuning/parity.check.tuning.progress.save' on your flash drive? Sending me a copy of this file should allow me to see what happened at each phase of the last check and confirm what the true count should be. Ideally I would also like a copy of the 'config/parity-checks.log' file off the flash drive so I can run a test against the actual history data from your system. If you do not want to post these files publicly you can send them via forum PM. In the meantime I will look at the plugin code to see if I can spot the problem.
  22. There is no GUI support for this as far as I know, but it should be easy enough to do from the command line. I am reasonably certain the required command will be: mkfs.xfs /dev/nvme0n1p3 although you might want confirmation from someone else to be safe.
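     For example (and echoing the earlier point - check that partition 3 actually exists before formatting anything):

        # Confirm the partition layout first; partition 3 must be present
        lsblk /dev/nvme0n1

        # Format the third partition as XFS (this destroys anything already on that partition)
        mkfs.xfs /dev/nvme0n1p3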
  23. I would have thought that a solution that issued the ‘powerdown’ command at the CLI level would be the easiest way to do this?
  24. Unlikely as HDD type drives cannot exploit the extra speed.
  25. With those settings I would expect all of those shares except the last to show orange, as they are on the cache drive which, being a single drive, has no redundancy. It is quite normal to have those shares on the cache drive for performance reasons and then periodically back up any important files to somewhere on the array. The last one would be because you have written files to it since mover last ran. When mover runs, any files for that share on the cache (assuming they are not open in an app) will be moved to the array and the icon will change to green.