[Plugin] CA Fix Common Problems



Here's an idea for another check this wonderful plugin could make, though I freely admit it is an edge case:

 

Check /mnt permissions to ensure the mode is 0755.

 

I suggest this because a sloppy boot-time script on my system borked permissions on /mnt, and the result was that all my User Shares (both via SMB and the GUI) vanished until I found and fixed it.
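
Something like this would catch it (a minimal sketch, not actual FCP code; stat -c %a prints the octal mode):

# warn if /mnt is not mode 0755
if [ "$(stat -c %a /mnt)" != "755" ]; then
    echo "Warning: /mnt is mode $(stat -c %a /mnt), expected 755"
fi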

 

Again, absolutely an edge case, but I thought it might save someone a couple of hours at some point :)

Link to comment
On 6/14/2021 at 7:15 PM, wblondel said:

Hello everyone,

 

I ran an "Extended test", but it seems to be stuck. One of my shares has 1m documents (for a total of 1.3TB), and it has been scanning it for more than an hour.

 

CPU usage of the extendedTest.php script is 20%, and for the shfs process it's 100%. Weirdly, there is no disk activity. Even iotop shows 0 everywhere.

 

I think reorganizing my shares so that they contain fewer files would solve the issue, but I still wanted to ask whether that's expected behaviour. Maybe it will finish in an hour or two; I'll update this post if so.

 

Still stuck after almost a week, 100% CPU during all this time :/

 

EDIT: After a few more hours the test finally ended!

Edited by wblondel
Link to comment
  • 1 month later...

I am getting these 2 errors:

  1. Multiple NICs on the same IPv4 network

    1. This is false, because I used to have two NICs installed, but now only one. I do not have "eth1" inside my /config/network.cfg, and I have no way of removing or seeing it inside Network Settings. Any tips?

  2. Invalid folder ram contained within /mnt

    1. This is my RAM disk placed under /mnt/ram. I could ignore this, but is there a way to make the plugin allow it?

tower-diagnostics-20210720-1929.zip

Link to comment
12 hours ago, mrbusiness said:

Multiple NICs on the same IPv4 network

  1. This is false, because I used to have two NICs installed, but now only one. I do not have "eth1" inside my /config/network.cfg, and I have no way of removing or seeing it inside Network Settings. Any tips?

 

Edit the config/network.cfg file on your flash drive and remove the items with [1] in the name; those are leftover from your other NIC. Also change SYSNICS to "1". Then reboot and re-run FCP.

 

# Generated settings:
IFNAME[0]="eth0"
PROTOCOL[0]="ipv4"
USE_DHCP[0]="yes"
DHCP_KEEPRESOLV="no"
USE_DHCP6[0]="yes"
DHCP6_KEEPRESOLV="no"
IPADDR[1]="192.168.1.101"   <---
NETMASK[1]="255.255.255.0"  <---
GATEWAY[1]="192.168.1.1"    <---
SYSNICS="2"                 <---

 

 

12 hours ago, mrbusiness said:

Invalid folder ram contained within /mnt

  1. This is my RAM disk placed under /mnt/ram. I could ignore this, but is there a way to make the plugin allow it?

 

I would just ignore it. FCP flags anything non-standard, and none of the standard places would make sense for what you are doing.

Link to comment

Is there a way to stop Fix Common Problems from checking for updates that are already queued to be installed? I keep getting "x docker app has an update available for it" notifications and want to minimize those Discord alerts, since they will get auto-updated at 5am.

 

Is there anywhere else to tweak the exact notifications that Fix Common Problems sends out?

Link to comment
  • 2 weeks later...

As a newcomer still learning the details of Unraid, I started using the ca-fix-common-problems plugin to optimize my installation.

 

After running it for the first time, I noticed this warning:

 

-CPU possibly will not throttle down frequency at idle, and

-Your CPU is running constantly at 100% and will not throttle down when it's idle (to save heat / power). This is because there is currently no CPU Scaling Driver installed. Seek assistance on the unRaid forums with this issue.

 

I am using an old Supermicro tower with dual Xeon E5405 (Harpertown) CPUs @ 2GHz.

The mainboard is the X7DVL.

I guess the CPU is so old that there is no way to optimize it and make the warning disappear?

 

What is that CPU scaling driver? Do I need to get it from Intel or Supermicro?

Or should I just keep it as it is and wait until there is budget for new hardware? (That could take 2 years; servers are so expensive here...)

 

Thanks for the info.

Link to comment
  • 2 weeks later...

@Squid

Proposal for a new check:

 

Compare both sizes and if they do not differ, show a warning:

du -h /mnt/user/domains/*/*.img
du -h --apparent-size /mnt/user/domains/*/*.img

 

Note: Maybe the paths should be obtained from the created VMs, as some people don't use the default domains share for their vdisks.
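
A rough sketch of what that check could look like (illustrative only, and assuming the default domains path):

# a sparse file's allocated size (du) is smaller than its apparent size
for img in /mnt/user/domains/*/*.img; do
    [ -e "$img" ] || continue
    real=$(du -B1 "$img" | cut -f1)
    apparent=$(du -B1 --apparent-size "$img" | cut -f1)
    [ "$real" -ge "$apparent" ] && echo "Warning: $img does not look sparse"
done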

 

One user had the problem that his cache was fully utilized after copying vdisk.img from his backup through Krusader:

https://forums.unraid.net/topic/112448-größe-für-ein-vm-image/?tab=comments#comment-1023910

 

At the moment it looks like Krusader does not support sparse files. It can be fixed (with the VMs stopped, of course) as follows:

for img in /mnt/cache/domains/*/*.img; do fallocate -d "$img"; done

 

 

Link to comment
2 hours ago, mgutt said:

Proposal for a new check:

I have mixed feelings about this. If the user doesn't understand the basics of sparse files, they shouldn't be taking advantage of the over-allocation ability.

 

Unless you are aware of and proactively managing free space, you should always assume your VM will take the entire space you give it at some point in the future.

 

2 hours ago, mgutt said:

Compare both sizes and if they do not differ, show a warning:

I would argue that the user should be warned if the size DOES differ and full allocation would result in an out-of-space condition. We regularly see users having issues with paused VMs when the volume runs out of space.

Link to comment
On 8/2/2021 at 11:26 AM, ullibelgie said:

After having run it for the first time I notice this warning:

That test isn't actually definitive, which is why it's not a warning but rather an "other".

 

If the output of this command

cat /proc/cpuinfo | grep MHz

changes on occasion while running, the CPU will throttle down.
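
For example, you can watch the reported clocks update in place (assuming the standard watch utility is available):

watch -n 1 'grep MHz /proc/cpuinfo'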

Link to comment
38 minutes ago, Squid said:

Which really should be handled within the GUI, although it's still an estimate and can't be enforced

Yeah, I don't think FCP is the proper place either. The only thing FCP should warn about is if the free space on any writeable volume is below some arbitrarily small figure like 1MB.
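
Something like this sketch, for illustration only (df -P reports the available space in KiB in column 4):

# warn when any writeable volume drops below roughly 1MB free
for mp in /mnt/disk[0-9]* /mnt/cache; do
    [ -d "$mp" ] || continue
    avail=$(df -P "$mp" | awk 'NR==2 {print $4}')
    [ "$avail" -lt 1024 ] && echo "Warning: $mp has less than 1MB free"
done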

 

The GUI should probably have an indication (not a warning) that the available free space on a given volume is below the minimum defined, so people know:

a. That volume will not be chosen for writes for shares that include it.

b. Manual writes directed to that specific volume could very well result in an out-of-space condition.

Link to comment

So I decided to check my sister's Unraid; it's a 4-bay unit. I ran the Extended Scan, and it's been running since yesterday afternoon. It's stuck and hasn't done anything, and now I see the drives are spun down. Do the diagnostics say anything?

Maybe something about what caused it to just be stuck? I looked in the system log, but I can't really tell.

 

mitch1.PNG

mitch2.PNG

mitchsserver-diagnostics-20210816-0818.zip

Link to comment

And can someone tell me what a Call Trace error is? It says nothing about what it is, what it's calling, etc. Also, what does "freezing of tasks failed" mean? What is it, what does it do, and why is it freezing?

 

I have an 8TB drive preclearing, so I'm hoping that stops the FCP plugin from freezing.

But I guess it could be a computer or power supply problem. Hoping not, but we'll see.

 

But hopefully the 8TB drive will fix something; it's newer, lol.

 

Edited by comet424
Link to comment
  • 3 months later...

Great plugin! I'd like to suggest a performance-related config check that could save other users a lot of time and frustration.

 

The script should check whether docker.img is stored on the array (spinning HDD) instead of the cache (SSD). I suffered several days of painfully slow Docker performance before realizing it was due to docker.img being on a spinning disk, even though I had configured the "system" share to prefer the cache.

 

Mover skipped the file since the Docker daemon was running at the time. I had to shut down the Docker service and manually invoke the mover; now my Docker performance is lightning fast, as expected!

Nov 29 03:27:31 darktower move: skip: /mnt/disk1/system/docker/docker.img
Nov 29 03:40:01 darktower move: skip: /mnt/disk1/system/docker/docker.img
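
A minimal sketch of such a check, assuming the default system share layout (hypothetical, not actual FCP code):

# docker.img should live on the cache, not on an array disk
for f in /mnt/disk[0-9]*/system/docker/docker.img; do
    [ -e "$f" ] && echo "Warning: docker.img found on an array disk: $f"
done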

 

Thanks!

Link to comment
  • 3 weeks later...

I always seem to have problems with the "Apply Fix" option when a Template URL differs. It says the fix is applied, but looking at the file it mentions, I can see it's not using the new URL, and so FCP still says it's an issue.

 

For example, at the moment, I get this in FCP;

 

Template URL for docker application pihole-template is not the same as what the template author specified.



The template URL the author specified is https://raw.githubusercontent.com/spants/unraidtemplates/master/Spants/pihole.xml. The template can be updated automatically with the correct URL.

 

Running the "Apply Fix" option, this is the output.

 

 

Template to fix:
/boot/config/plugins/dockerMan/templates-user/my-pihole-template.xml
New template URL:
https://raw.githubusercontent.com/spants/unraidtemplates/master/Spants/pihole.xml
Loading template...
Replacing template URL...
Saving template...
Fix applied successfully!

 

Running a Rescan in FCP, it shows the same issue, but a slightly different template now (it has v2 in the URL):

Template URL for docker application pihole-template is not the same as what the template author specified.



The template URL the author specified is https://raw.githubusercontent.com/spants/unraidtemplates/master/Spants/pihole-v2.xml. The template can be updated automatically with the correct URL. 

 

Again, running the "Apply Fix" option shows:

Template to fix:
/boot/config/plugins/dockerMan/templates-user/my-pihole-template.xml
New template URL:
https://raw.githubusercontent.com/spants/unraidtemplates/master/Spants/pihole-v2.xml
Loading template...
Replacing template URL...
Saving template...
Fix applied successfully!

 

Looking at the file that "Apply Fix" mentions, it shows the correct URL for this second message:

 

root@Tower [Fri Dec 24 09:32]: ~# grep TemplateURL /boot/config/plugins/dockerMan/templates-user/my-pihole-template.xml
<TemplateURL>https://raw.githubusercontent.com/spants/unraidtemplates/master/Spants/pihole-v2.xml</TemplateURL>

 

I feel like it's stuck in a bit of a loop. I'm not sure if it's FCP or the CA App/Template, but I have had this issue a few times in the past and ended up having to ignore it.

Edited by timethrow
Formatting
Link to comment

FixCommonProblems tells me I have a share named "user". I checked and deleted the directory named "user" in /mnt/user, where all the share directories are located; that "user" directory was created by a docker application at one point. But after deleting it, the plugin still tells me I have a share named "user". I checked the directory tree and found that /mnt contains directories for all the disks (disk1 to disk6), plus the following directories: disks, remotes, user, and user0.

 

The disk1..6 directories point to each of my 6 data disks, the directories "disks" and "remotes" are empty, and both "user" and "user0" point to the directory trees that contain the data from all the shares. "user" and "user0" point to exactly the same stuff.

 

I have a docker application that references:

/downloads      /mnt/user/
/config            /mnt/user/appdata/rutorrent

 

I tried to change the references from "user" to "user0", but I still get the same error message about having a share named "user", and also that the bittorrent docker app is deprecated. I am going to uninstall it, but the problem of the "user" share remains.

 

Should I rename the directory "user" in /mnt? Or should I create a new directory in /mnt, map it to the same content as user and user0, and then use it for apps?

 

Thank you so much for reading this.

user_user0_dirlist.png

Edited by Juniper
Link to comment

I have now uninstalled the deprecated rutorrent docker app and deleted the directories it had created under /mnt/user. After rebooting the server, FixCommonProblems no longer complained about the "user" share.

 

But the question remains: when I install a different bittorrent docker app, I will also have to reference the directory where the shares are located, and that is /mnt/user. Will I then get the same problem again with having a "user" share?

 

Should I rather make a new directory in /mnt which points to the same place "user" and "user0" point to? If yes, how would I do that?

 

Thank you so much.

Link to comment
1 hour ago, Juniper said:

Should I rather make a new directory in /mnt which then points to the same where "user" and "user0" point to? If yes, how would I do that?

No. If you created /mnt/test then the test directory would only exist in RAM and would be lost on reboot.

 

I'm not positive, but I believe FCP was complaining that you had created a share named "user" (/mnt/user/user). I'm sure @Squid could explain that error. Just set your paths to point to, say, a share named "torrents" (/mnt/user/torrents). That will work just fine.

Link to comment
