Squid

[Plugin] Fix Common Problems - Beta [THREAD NOW CLOSED]

54 posts in this topic


The .Recycle.Bin is not really a share; it is a folder created by SMB when a file is deleted, and it is not defined as a share to unRAID.  If you have the disks and cache set up as shares and a file is deleted on the disk or cache share, SMB will create a .Recycle.Bin folder.  The problem is that I think you see anything at the /mnt/diskx or /mnt/cache level as a share, even if it is not defined as one.  For example, if you have cache sharing turned on:

 

/mnt/cache

/mnt/cache/.Recycle.Bin

/mnt/cache/appdata

/mnt/cache/domains

 

could all possibly be seen as shares.

 

I installed this plugin just to give it a go, so I think I understand what you are doing.  I don't understand what the problem is with the .Recycle.Bin, except I think you are flagging it as a share that is not defined?  Maybe all dot folders should be exceptions, or just the .Recycle.Bin.

 

I would make an exception on the .Recycle.Bin folder.

Already fixed where I ignore any hidden shares.

 

unRAID treats all top-level folders as a share regardless of whether they are defined or not (and I look at top-level folders within /mnt/cache and /mnt/user0).  If they are not defined, they are implied cache-only, because mover will not move any files contained on the cache drive within those folders to the array.  Hence the suggestion that pops up when an implied cache share exists on both the array and the cache drive is to actually define where it's supposed to be.

Pretty sure if you check the settings for a share that you haven't defined, unRAID considers it implied cache NO, which means it won't move from cache, but writes will go to array.
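A minimal sketch of the top-level-folder scan being discussed, skipping hidden folders like .Recycle.Bin.  The pool path and /boot/config/shares location are the stock unRAID ones, but the helper name is made up and this is not the plugin's actual code:

```shell
#!/bin/bash
# List top-level folders on a pool that have no matching share definition,
# skipping hidden folders such as .Recycle.Bin.  /boot/config/shares is
# where unRAID keeps per-share .cfg files; treat both paths as assumptions.

undefined_shares() {    # usage: undefined_shares <pool-dir> <share-cfg-dir>
    local pool="$1" cfgdir="$2" d name
    for d in "$pool"/*/; do
        [ -d "$d" ] || continue                # no subfolders at all
        name=$(basename "$d")
        case "$name" in .*) continue ;; esac   # ignore hidden folders
        [ -f "$cfgdir/$name.cfg" ] || echo "$name"
    done
}

# Example: undefined_shares /mnt/cache /boot/config/shares
```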

 


Attached pic shows a small issue, running v6.1.9, User Shares turned off, and Dockers currently off too.  Trying an earlier version showed no errors, just the same 3 auto-update notifications.  Otherwise fine.

You just always gotta be different, don't you  ;)  EDIT: should be fixed now in the next update, but a side effect is that disabling user shares will also effectively disable the checks for similarly named shares (MyShare and myshare).  Probably won't be a big problem.

 

This test took only a few seconds.  As a data point, the earlier test on my system took about a minute and a half, perhaps because it has to spin up all drives?  I'm not sure that has to be mentioned, but it's useful to know that all your drives will be spun up.
Correct.  Will add it to the pop up

 

My auto-update philosophy - All software releases and updates are minefields, so it's always best to let someone else step first.  It doesn't matter who the author is or whether it's a beta, a final, or a simple point release.  And since there are always others happy to jump in ahead of me, I'm happy to let them!  I prefer auto-updates always turned off, and, depending on the complexity of the release, past history of the author, and the level of risk, I wait from a day to a month, observing the carnage.  When I feel like it, I update.  I suspect most computer veterans feel that way.  But that doesn't mean that I think auto-updates are not good for others.  After all, I do need them to find the mines!

I have a similar philosophy but,

 

Here's my justification for those errors: (but as I said, this will be disableable or moved to a warnings section)

 

webUI:  By and large any and all updates here are bug fixes and highly recommended if not required.

CA: This plugin affects the running of applications, and because it scrapes data supplied by third parties, it can and does go down every once in a while due to unexpected data changes.

This particular plugin: Whether running beta or not, as new checks are added I want to make sure that all installations get those additional checks.  Otherwise IMHO the plugin is rather useless.

 

The above checks are only actually performed if CA is installed.

 

The Dynamix plugin update check is required for any of this to work, so it's also flagged as an error if it's not enabled.

 

The above three plugins are the only items which I foresee generating an error regarding auto-update.  Since CA is outright required to actually do the auto-update, it's flagged as an error if it's not installed (this will be moved to a warnings section, however).  Mind you, neither LT / bonienl / myself are perfect, and occasionally regression errors get introduced in updates, but the next auto-update would tend to fix it.  Also keep in mind that CA does not and will never auto-update the base OS.  Ever.

 

On a related note, Powerdown is flagged as an error (and will remain so) if it's not installed, simply because it should be installed on each and every machine out there, no matter what.  (Unless someone can justify to me why it should NOT be installed.)

 

For the ultimate background checks, warnings will have separate notification levels from errors, so it won't be an issue with users being hounded to auto-update or install CA or whatnot.

 


Pretty sure if you check the settings for a share that you haven't defined, unRAID considers it implied cache NO, which means it won't move from cache, but writes will go to array.

Just a different way of looking at it.  I think of it that since mover won't touch it if it's on the cache, it's implied cache-only.  Same thing, just semantics.  Either way, it's flagged if the folder exists on both the array and the cache, because at that point it's just not correct under either your reasoning or mine.  (And in my reasoning, the cache drive has the ability to completely fill up because of writes to the share.)

 


Cool, so you use the tool to fix itself 8)

And that's my entire point right there!


Just a different way of looking at it.  I think of it that since mover won't touch it if it's on the cache, it's implied cache-only.  Same thing, just semantics.  Either way, it's flagged if the folder exists on both the array and the cache, because at that point it's just not correct under either your reasoning or mine.  (And in my reasoning, the cache drive has the ability to completely fill up because of writes to the share.)

This is why people used to get their appdata moved from cache: mover used to move anything that wasn't cache-only. Mover was changed so it now only moves cache=Yes shares. The default setting has always been cache=No.

 

Not really the same as implied cache-only, because if you write to the share it will not write to cache, so you can't fill up the cache by writing to a share you haven't defined. Like all user shares, reads will include all disks, including the cache.


Need to ignore the .Recycle.Bin share settings, as we cannot change anything about that share's settings in the plugin.

I use that plugin but I don't have that share. Are you sure you should have it? Maybe it is left over from an old version of the plugin.

 

Isn't it set in the smb-extra.conf?

 

Myk


Not really the same as implied cache-only, because if you write to the share it will not write to cache, so you can't fill up the cache by writing to a share you haven't defined. Like all user shares, reads will include all disks, including the cache.

Unless you are writing directly to /mnt/cache/share.

 

Net result is still the same. 

 

If it's not defined, it should only exist on the array, as that's where writes to /mnt/user will go.

But if it's not defined, it should not exist on the cache drive, because mover won't touch it.

 

And that's a configuration / user  error

 

EDIT: Changed the error to be implied array only...


Here's my justification for those errors: (but as I said, this will be disableable or moved to a warnings section)

About the auto-update notifications, I agree with you.  I'm completely OK with seeing those notifications every time.  In fact, I'd put a low priority on any ability to disable them.


Have you considered writing any problems detected to the syslog?  And possibly any actions taken to resolve issues.

 

Another idea, a non-interactive mode that could be called by the Diagnostics collection process (or anyone), displays nothing, takes no actions, just writes any findings to something like problems.txt for the diagnostics zip.  Saves time for support people.


That's easy enough for dlandon / bonienl to implement.  (And an awesome idea - I hate scanning through syslogs trying to figure out whether a drive is mounted read-only or not.)

 

If /usr/local/emhttp/plugins/fix.common.problems/scripts/scan.php exists, execute it.

 

Then save the contents of /tmp/fix.common.problems/errors.json (or ideally parse it to remove the embedded HTML buttons) as part of the diagnostics or as a separate file.  (If it doesn't exist, no errors were found by the scan.)
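The hook described above could be sketched roughly like this.  Only the scan.php and errors.json paths come from the post; the function name, the output file, and the HTML-stripping sed are illustrative, not the actual diagnostics code:

```shell
#!/bin/bash
# Run the plugin's scanner if it exists, then capture its findings with
# the embedded HTML (fix buttons etc.) stripped out.

collect_problems() {    # usage: collect_problems <scan.php> <errors.json> <out.txt>
    local scan="$1" results="$2" out="$3"
    if [ -f "$scan" ]; then
        php "$scan"                              # non-interactive scan
    fi
    if [ -f "$results" ]; then
        sed 's/<[^>]*>//g' "$results" > "$out"   # drop embedded HTML
    else
        echo "No errors found by the scan" > "$out"
    fi
}

# Example:
# collect_problems /usr/local/emhttp/plugins/fix.common.problems/scripts/scan.php \
#                  /tmp/fix.common.problems/errors.json /tmp/diagnostics/problems.txt
```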

 

Never thought about the syslog, but it is a great idea and now implemented.

 

 

BTW, here are the further checks I've got in but not released yet:

 

Check for each docker app's /config folder mapped to a disk (or cache) and not to /mnt/user/appdata...

Check for /var/log more than 50% full

Check for docker.img more than 80% full

Check for rootfs more than 50% full

Check for date and time to be within 5 minutes of actual time
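The fullness checks in that list can be sketched with df.  The thresholds are the ones from the list; the function names are made up, and the docker.img and time-drift checks are left out:

```shell
#!/bin/bash
# df -P prints a POSIX-format table whose 5th column is Use% ("43%");
# strip the % sign and compare against a threshold.

pct_used() {            # percent-used of the filesystem holding <path>
    df -P "$1" 2>/dev/null | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

check_threshold() {     # usage: check_threshold <path> <limit-percent> <label>
    local used
    used=$(pct_used "$1")
    [ -n "$used" ] || return 0              # path not mounted / df failed
    if [ "$used" -gt "$2" ]; then
        echo "WARNING: $3 is ${used}% full (limit $2%)"
    fi
}

check_threshold /var/log 50 "/var/log"
check_threshold / 50 "rootfs"
```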

 


I really like this plugin!

Long time users may have forgotten some of the customizations they made in the past so here are some ideas to help identify/prevent problems:

  • Have an option to display the contents of go script, as a reminder of what is there. Provide guidance on replacing specific lines with a plugin.  i.e.:
      - "modprobe <sensor>" and "/usr/bin/sensors -s" should be replaced with the Dynamix System Temp plugin
      - DailyCacheTrim should be replaced with Dynamix SSD TRIM
     
  • Display all files in /boot/extra/, as a reminder of what is there. Provide guidance on which ones can be replaced with the Nerd Tools.  Looks like you are already flagging 32 bit files, very cool.
     
  • Show alert if there are known problem plugins in the /boot/config/plugins/ directory. i.e. recommend replacing SNAP with UD.  If you can detect v5 plugins, highlight those as definite issues.


I really like this plugin!

TL;DR those threads, but what's the issue?  If the share setting is correct, there's no issue that would be flagged.

Long time users may have forgotten some of the customizations they made in the past so here are some ideas to help identify/prevent problems:

  • Have an option to display the contents of go script, as a reminder of what is there. Provide guidance on replacing specific lines with a plugin.  i.e.:
      - "modprobe <sensor>" and "/usr/bin/sensors -s" should be replaced with the Dynamix System Temp plugin
      - DailyCacheTrim should be replaced with Dynamix SSD TRIM

Possible, but down the road, in the distance.

 

  • Display all files in /boot/extra/, as a reminder of what is there. Provide guidance on which ones can be replaced with the Nerd Tools.  Looks like you are already flagging 32 bit files, very cool.

Merely having files in /boot/extra or /boot/packages is neither an error nor a warning, as there are very valid use cases for both.

 

32-bit files are flagged if the file name contains i386 or i486.
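That name check amounts to something like the following.  The function name is made up; only the i386/i486 patterns come from the post:

```shell
#!/bin/bash
# Flag anything in the extras directory whose file name carries a
# 32-bit architecture tag (i386 or i486).

find_32bit() {          # usage: find_32bit <dir>
    local dir="$1" f
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        case "$(basename "$f")" in
            *i386*|*i486*) echo "$f" ;;
        esac
    done
}

# Example: find_32bit /boot/extra
```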

  • Show alert if there are known problem plugins in the /boot/config/plugins/ directory. i.e. recommend replacing SNAP with UD.  If you can detect v5 plugins, highlight those as definite issues.

Already thought of that (actually many months ago with regards to CA).

Was going to actually start on that shortly, but wanted to switch gears and get the background checks, notifications, etc. working.


Hi,

 

I get the "Dynamix WebUI not set to auto update" warning, but there is no auto-update available (I am on 6.1.8).

If it is available in 6.1.9 or 6.2 then there is no big issue, I just need to upgrade.


You have to have CA installed for the auto update settings

 


Edit:  you're probably using an older version of CA.  Update it first.


OK, just tested this with 6.1.9 with the stock webGUI.  It's a bug.

 

CA doesn't give you the option to upgrade the webUI if dynamix.plg doesn't exist.  But it doesn't exist until an update to the webUI is actually installed (and then for subsequent updates it works fine).  I'll think about it overnight.

 


Updated:

 

Errors are now logged to the syslog.  Fixed RobJ's unique setup of no user shares and no docker returning errors.

Warnings are now separated from errors.  Next I'll start working on the background tasks and some prettying up, and hopefully will be at RC stage.

 

Checks are now this:

 

Generating Errors

 

Implied Array Only share having files on cache drive

Cache Only share having files on array

Array Only share having files on cache drive

Plugin Update Check not enabled

This plugin (fix common problems) not set to autoupdate

Similar named shares differing by case (MyShare, myshare)

Powerdown not installed

Server unable to communicate to outside world

Unable to write to array disks or cache drive

Unable to write to flash drive

Unable to write to docker image

Any disk disabled

Any disk missing

Any read error on a disk

Any file system error on a disk

This plugin (fix.common.errors.plg) not up to date (displayed only if not set to auto update)

Any 32 bit package found within /boot/extra or /boot/package

/var/log more than 80% full (warning at 50%)

docker image file more than 90% full (warning at 80%)

rootfs more than 90% full (warning at 75%)

Any share having the same disk set in both included and excluded disks

Global share settings having the same disk set in both included and excluded disks

 

 

Generating Warnings

 

CA not set to auto update itself

Dynamix WebUI not set to autoupdate

CA not installed

Default docker appdata location set to be /mnt/user/... (this is a 6.2 thing only)

Default docker appdata location not a cache only share (if cache present, and 6.2 only)

Any SSD part of the array

Any installed plugin not up to date (if its not set to auto update)

Any docker application with an update available

Any docker application with its /config volume mounting set to be /mnt/user/appdata/...

/var/log more than 50% full (error at 80%)

docker image file more than 80% full (error at 90%)

rootfs more than 75% full (error at 90%)

date and time on server differ by more than 5 minutes from the actual date and time

scheduled parity checks disabled

Any share having both included and excluded disks set

Global share settings having both included and excluded disks set
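The included/excluded-disk checks in both lists boil down to intersecting two comma-separated lists.  A minimal sketch; the function name is made up, and the cfg key names in the comment are assumptions about unRAID's share config format:

```shell
#!/bin/bash
# Share settings live in files like /boot/config/shares/<name>.cfg with
# lines such as shareInclude="disk1,disk2" and shareExclude="disk3"
# (key names assumed).  Print any disk that appears in both lists.

overlap() {             # usage: overlap "<include-list>" "<exclude-list>"
    local b="$2" d
    echo "$1" | tr ',' '\n' | while read -r d; do
        [ -n "$d" ] || continue
        case ",$b," in *",$d,"*) echo "$d" ;; esac
    done
}
```

An error would be raised for any share (or the global settings) where `overlap` prints anything.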

 


TL;DR those threads, but what's the issue?  If the share setting is correct, there's no issue that would be flagged.

 

Well, the way we're using the share does look incorrect.  Basically, we disable mover by setting the share to cache-only (or cache disabled, depending on where you want new files to be written).  Then we manually put older (or lesser used) files on the array and newer (or more likely to be used) files on the cache.  The user share makes the actual placement of the files transparent to applications.

 

In the first example, jonp keeps a bunch of games for his VM on the array and then manually moves the game he wants to play onto his SSD cache drive for fast performance.

 

In the second example, I store recent CrashPlan backup files on the cache and move older ones to the array.  This allows me to backup to unRAID without spinning up the array, and without filling my cache.

 

The plugin considers this type of thing to be an error, so I just want to make sure it doesn't automatically "fix" it.

 

  • Display all files in /boot/extra/, as a reminder of what is there. Provide guidance on which ones can be replaced with the Nerd Tools.  Looks like you are already flagging 32 bit files, very cool.

Merely having files in /boot/extra or /boot/packages is neither an error nor a warning, as there are very valid use cases for both.

 

The recommendation seems to be to use plugins whenever possible, rather than manually editing the go script or placing files in /boot/extra/.  So I was thinking you could flag any packages that were placed manually, if they are packages that Nerd Tools is set up to manage.

 

 

yada yada yada

Possible, but down the road, in the distance.

 

Fair enough :)


Fair enough.  I see what you're doing there (although in my opinion a different type of share should be created to handle a use case such as this, and I would think that LT would have done this rather than jerking around with the vagaries of how mover operates to accomplish this...  poor planning on their part)

 

That being said, there will never be any automatic fixes for any issue.  You'll always have to at least click a button.  The problem with your setup is that if this was, say, your movies share, then mover would never move the files to the array and the cache would fill up.  Happens a fair amount around here with misconfigured apps and shares.


 


I agree this sort of thing should be flagged because most of the time it is a problem, and as long as nothing happens automatically we'll be fine.  Although if there was a way to mark a specific share to be ignored, that would be cool too :)


- Added checks if docker is using volumes mounted with unassigned devices and not using slave mode (6.2 only)

- Added check for FAT32 on the flash drive (no clue if the system will get up and running enough to even run this plugin if the flash is formatted as exFAT, but the check is here)

- Added check for only supported file systems within the cache / array *

 

* I have seen reports of NTFS being used on a cache drive.  I think the procedure the user uses is to set the system up, install UD, then install a previously formatted NTFS cache drive.  You *may* also be able to do this on a new array by installing UD before setting up the parity disk.
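A rough sketch of what that filesystem check could look like, driven off disks.ini (which the next post mentions).  The fsType key, the disks.ini path in the example, and the exact list of supported filesystems are all assumptions:

```shell
#!/bin/bash
# Scan a disks.ini-style file for fsType entries and report anything
# outside the assumed supported set (xfs/btrfs/reiserfs, circa 6.1/6.2).

check_fs() {            # usage: check_fs <disks.ini>
    grep -o 'fsType="[^"]*"' "$1" | cut -d'"' -f2 | while read -r fs; do
        case "$fs" in
            xfs|btrfs|reiserfs|"") ;;      # supported, or not yet mounted
            *) echo "unsupported filesystem: $fs" ;;
        esac
    done
}

# Example: check_fs /var/local/emhttp/disks.ini
```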

 

P.S.: if you need to test any error/condition that's possible to recreate on my test server, feel free to ask/PM me.

johnnie, are you able to test this on the cache drive?  Remove any cache drive, install UD so that the NTFS drivers are guaranteed to be there, and set up a cache drive with an NTFS-formatted drive.  This works on my simulated disks.ini file, but I've looked around here and, believe it or not, I can't find a hard drive that I don't currently have in use somewhere.


I agree this sort of thing should be flagged because most of the time it is a problem, and as long as nothing happens automatically we'll be fine.  Although if there was a way to mark a specific share to be ignored, that would be cool too :)

What I'll do is switch it over to being a warning, as it's not exactly a major error, and the system is still functional with it like that.


Thought even more about it, and what I'm going to do is completely remove the options to have this plugin move the data around for you to/from the cache, and instead point users to Dolphin / Krusader / mc.  I'll still leave the suggestion to fix the settings, however.

 

Makes my job a ton easier, and takes me off the hook if anything unexpected should happen during the move.


I think this plugin is very nice.  I can see where it would be helpful when there are issues that a user would not even be aware of.

 

Just updated and found a warning about FTP being enabled.  I have found that FTP gets re-enabled on a reboot even if I disable it.  This behavior occurs on 6.1.9 and 6.2.  I've reported this as a defect.  I actually think FTP should be disabled by default and only enabled if a user enables it.


sure... just when I'm about to release it...  I'll look at it...

 


 

I don't think the FTP disabling issue is yours.  It is on the FTP settings page.  If it is disabled and you reboot, it re-enables.  Not related to your plugin.
