[Plugin] CA Fix Common Problems


Recommended Posts

4 hours ago, Cold Ass Honkey said:

 

@dlandon: Thanks for the clarification. Was this folder addition for a specific purpose or just a happy accident?

That folder is for people to add their mounts for things like ZFS devices and rclone, rather than putting the mounts in places like /mnt/disks or /mnt/remotes.  FCP was updated to not flag that folder, but appears to need another update.  For now, ignore the FCP warning.  Read the UD release notes for more info.  Your system doesn't have a problem.
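For example, a minimal sketch of such a mount, assuming an rclone remote named "Dropbox:" has already been configured (the remote name is only an illustration):

# create a mount point inside the UD-provided folder, then mount the remote there
mkdir -p /mnt/addons/Dropbox
rclone mount Dropbox: /mnt/addons/Dropbox --daemon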

Link to comment
2 hours ago, OsoPolar said:

Hello 

I have been running for years and never had any errors, but I see some changes have been made.
The error points me to no specific location:

Invalid folder addons contained within /mnt

I have read the thread and added my diagnostics file. Please can somebody help me fix the issue?

bearcave-diagnostics-20230304-0911.zip 414.31 kB · 0 downloads

Nothing to fix.  Ignore FCP for now until it gets an update.  UD added that folder.  Read UD release notes for more info.

Link to comment

9 hours ago, dlandon said:

Ignore it.  The UD update added the folder and FCP is reporting it as a problem, but it is not.

 

Link to comment

For some time now, "Fix Common Problems" has been nagging me with a warning:

 

Missing DNS entry for host

Using DNS server 192.168.0.8, Unraid is unable to resolve 'F.XXX.local'. If this url resolves for your client computers using a different DNS server, you can probably ignore this warning. Otherwise, you should set your TLD to 'local' or add a DNS entry for 'F.XXX.local' that points to 192.168.0.4. The local TLD can be adjusted here:

 

It is clear what it means, but it is also wrong 😞

 

root@F:~# dig F.XXX.local

; <<>> DiG 9.18.9 <<>> F.XXX.local
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 33864
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;F.XXX.local.                        IN      A

;; ANSWER SECTION:
F.XXX.local.         0       IN      CNAME   f.XXX.de.
f.XXX.de.            242     IN      A       192.168.0.4

;; Query time: 1 msec
;; SERVER: 192.168.0.8#53(192.168.0.8) (UDP)
;; WHEN: Sun Mar 05 07:10:59 CET 2023
;; MSG SIZE  rcvd: 84

 

As you can see, the entry exists, but it is a CNAME. I guess "Fix Common Problems" does not do a recursive lookup and does not expect to get back a CNAME?
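For reference, dig +short shows the chain at a glance, printing each CNAME hop followed by the final A record (same server as above):

dig +short F.XXX.local @192.168.0.8
# f.XXX.de.
# 192.168.0.4

A check that only accepts a direct A record in the answer would flag this host even though every client resolves it fine.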

 

Link to comment

I woke up this morning to my server seized up with no dockers operating correctly, and many that came back up after my regular backup window were missing IP addresses.  After a few hard reboots I was able to get the server running again, and dockers are now coming back online; however, none of my reverse proxied addresses are working, and I am getting "Web Server is down" on all of them.  Nginx looks correct from what I can tell, and Cloudflare isn't reporting any errors right now.  I'm working through this issue, but I'm more concerned with whether or not something else is happening on the server outside of this reverse proxy issue.

 

I did see a series of btrfs errors, and I have had issues with my cache drive recently.  I have reformatted the cache, but I'm curious whether it needs to be replaced entirely.
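(For anyone with the same symptoms, two checks that help separate a failing drive from stale filesystem errors; sdX is a placeholder for the actual cache device:)

# see whether the btrfs error counters keep climbing after the reformat
btrfs device stats /mnt/cache
# and check the drive's own SMART health
smartctl -a /dev/sdX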

trescommas-diagnostics-20230306-0830.zip

Link to comment

Invalid file (or folder) contained within /mnt

 

Hello,

 

I am getting this warning from Fix Common problems.

 

I think initially this came about because I was using /mnt/disks/Dropbox for Rclone, but switched to /mnt/addons/Dropbox based on the recommendation.

 

What I cannot understand is that after moving this mount point, I still get this error. The /mnt/disks directory is also now empty.

 

I have attached diagnostics. Thank you for this and the other great tools.

tower-diagnostics-20230309-1458.zip

Link to comment

@dlandon?

 

tmpfs           1.0M     0  1.0M   0% /mnt/addons
tmpfs           1.0M     0  1.0M   0% /mnt/rootshare
/dev/md1         19T   18T  324G  99% /mnt/disk1
/dev/md2        3.7T  3.7T   29G 100% /mnt/disk2
/dev/md3        9.1T  377G  8.8T   5% /mnt/disk3
/dev/md4        9.1T   65G  9.1T   1% /mnt/disk4
/dev/md5        9.1T   65G  9.1T   1% /mnt/disk5
/dev/md6        9.1T   65G  9.1T   1% /mnt/disk6
/dev/sdb1       2.1T   58G  1.8T   4% /mnt/cache
shfs             59T   23T   37T  38% /mnt/user0
shfs             59T   23T   37T  38% /mnt/user
/dev/loop2       20G   13G  6.9G  64% /var/lib/docker
/dev/loop3      1.0G  4.5M  905M   1% /etc/libvirt
Dropbox:         12G  3.1G  8.5G  27% /mnt/addons/Dropbox

/mnt/addons is in there twice??

Link to comment
1 hour ago, Squid said:

@dlandon?

 

tmpfs           1.0M     0  1.0M   0% /mnt/addons
tmpfs           1.0M     0  1.0M   0% /mnt/rootshare
/dev/md1         19T   18T  324G  99% /mnt/disk1
/dev/md2        3.7T  3.7T   29G 100% /mnt/disk2
/dev/md3        9.1T  377G  8.8T   5% /mnt/disk3
/dev/md4        9.1T   65G  9.1T   1% /mnt/disk4
/dev/md5        9.1T   65G  9.1T   1% /mnt/disk5
/dev/md6        9.1T   65G  9.1T   1% /mnt/disk6
/dev/sdb1       2.1T   58G  1.8T   4% /mnt/cache
shfs             59T   23T   37T  38% /mnt/user0
shfs             59T   23T   37T  38% /mnt/user
/dev/loop2       20G   13G  6.9G  64% /var/lib/docker
/dev/loop3      1.0G  4.5M  905M   1% /etc/libvirt
Dropbox:         12G  3.1G  8.5G  27% /mnt/addons/Dropbox

/mnt/addons is in there twice??

It's two separate mounts.  The /mnt/addons mount is the 1MB protection mount.  The /mnt/addons/Dropbox is the user's mount.
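Roughly, a sketch of what that looks like (my shorthand, not UD's actual code):

# the protection mount: a tiny tmpfs occupies /mnt/addons so stray writes
# cannot land on the RAM-backed rootfs
mount -t tmpfs -o size=1M tmpfs /mnt/addons
# the user's mount is then layered on a subdirectory inside it,
# which is why df shows two separate lines
mkdir -p /mnt/addons/Dropbox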

 

Note df when I have a disk mounted:

Filesystem      1K-blocks       Used  Available Use% Mounted on
rootfs           16259948     298856   15961092   2% /
tmpfs               32768        456      32312   2% /run
/dev/sda1        62399448     930936   61468512   2% /boot
overlay          16259948     298856   15961092   2% /lib
overlay          16259948     298856   15961092   2% /usr
devtmpfs             8192          0       8192   0% /dev
tmpfs            16273892          0   16273892   0% /dev/shm
tmpfs              131072        700     130372   1% /var/log
tmpfs                1024          0       1024   0% /mnt/disks
tmpfs                1024          0       1024   0% /mnt/remotes
tmpfs                1024          0       1024   0% /mnt/addons
tmpfs                1024          0       1024   0% /mnt/rootshare
/dev/md1p1     1952560688 1495015100  457545588  77% /mnt/disk1
/dev/md2p1     1952560688 1361615884  590944804  70% /mnt/disk2
/dev/sdf1       499862392    7818812  492043580   2% /mnt/cache
/dev/sdj1       249935976   39671448  210264528  16% /mnt/sysdata
shfs           3905121376 2856630984 1048490392  74% /mnt/user0
shfs           3905121376 2856630984 1048490392  74% /mnt/user
/dev/loop2       20971520       4772   20429372   1% /var/lib/docker
/dev/loop3        1048576       5956     924380   1% /etc/libvirt
MusicBackup    1885863040        128 1885862912   1% /mnt/disks/Music_Backup

The /mnt/disks and /mnt/disks/Music_Backup are both there.

 

FCP is reporting an invalid path?

Mar  9 12:53:21 Tower root: Fix Common Problems: Error: Invalid folder path contained within /mnt

Is that /mnt/addons/Dropbox?

Link to comment

Good morning everyone. I think I have something misconfigured in my BIOS, and it is causing my log to fill up in a day. Could someone help me out so I can resolve this?

 

Mar 13 22:39:20 LAB-Unraid-01 kernel: ACPI Error: Aborting method \_SB.PC00.PEG1.PEGP._DSM due to previous error (AE_ALREADY_EXISTS) (20220331/psparse-529)
Mar 13 22:39:20 LAB-Unraid-01 kernel: ACPI BIOS Error (bug): Failure creating named object [\_SB.PC00.PEG1.PEGP._DSM.USRG], AE_ALREADY_EXISTS (20220331/dsfield-184)
Mar 13 22:39:20 LAB-Unraid-01 kernel: ACPI Error: AE_ALREADY_EXISTS, CreateBufferField failure (20220331/dswload2-477)

 

This is the error; I am attaching the full diagnostics as well.

lab-unraid-01-diagnostics-20230314-0724.zip
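(While waiting for a proper BIOS fix, a stopgap that keeps the log usable is filtering the messages out of syslog. This sketch assumes Unraid's stock rsyslog reads /etc/rsyslog.d/*.conf - verify in /etc/rsyslog.conf - and note that /etc lives in RAM, so the filter will not survive a reboot:)

# drop the repeating ACPI messages before they reach the log
cat >> /etc/rsyslog.d/acpi-noise.conf <<'EOF'
:msg, contains, "ACPI Error" stop
:msg, contains, "ACPI BIOS Error" stop
EOF
/etc/rc.d/rc.rsyslogd restart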

Link to comment

Suggestion for an improvement:

 

Please see this support issue I raised:

 

Fix Common Problems Reporting Server Out Of Memory Errors

 

Basically, if a container with restricted RAM runs out of that RAM, Unraid will "sacrifice the child" and kill the container process, and this gets reported to the syslog. Fix Common Problems then reports it as if the server itself has run out of memory. It would be nicer to report that "container X has run out of memory...".
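For what it's worth, the kernel log already distinguishes the two cases on modern kernels, so there is something for the plugin to key off (a rough sketch, not the plugin's actual check):

# a container hitting its own limit logs a memcg-constrained kill:
#   oom-kill:constraint=CONSTRAINT_MEMCG ... Memory cgroup out of memory: Killed process ...
# a genuine host-wide OOM logs:
#   oom-kill:constraint=CONSTRAINT_NONE ... Out of memory: Killed process ...
grep -i "oom-kill\|out of memory" /var/log/syslog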

Link to comment
  • 2 weeks later...
1 hour ago, frodr said:

The old cache is removed

And that's the problem.  Odds on you have a docker app (or the default appdata share in Settings -> Docker) directly referencing /mnt/cache in one of the paths.  They need to be looked at and updated appropriately.
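A quick way to spot the offenders, assuming the stock location for docker templates on the flash drive:

# list any container template still referencing the old pool directly
grep -rl '/mnt/cache' /boot/config/plugins/dockerMan/templates-user/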

Link to comment
1 hour ago, Squid said:

And that's the problem.  Odds on you have a docker app (or the default appdata share in Settings -> Docker) directly referencing /mnt/cache in one of the paths.  They need to be looked at and updated appropriately.

 

No dockers have a path to the old cache pool. Their paths are either /mnt or /mnt/user. But in the path settings, the old cache pool "cache" is still in the list after removing the pool.

 

Screenshot 2023-04-04 at 16.56.00.png

Link to comment
2 hours ago, frodr said:

 

No dockers have a path to the old cache pool. Their paths are either /mnt or /mnt/user. But in the path settings, the old cache pool "cache" is still in the list after removing the pool.

 

Screenshot 2023-04-04 at 16.56.00.png

Have you rebooted since changing the pools around? If not, the /mnt/cache folder could be left over from before you reconfigured things. If it reappears after a reboot, then something is still configured with that as the start of its path.

Link to comment
55 minutes ago, itimpi said:

Have you rebooted since changing the pools around? If not, the /mnt/cache folder could be left over from before you reconfigured things. If it reappears after a reboot, then something is still configured with that as the start of its path.

 

Yes, I have rebooted. After doing it once more, the error message was gone. I also removed the DirsyncPro docker; its app folder had kept reappearing in the old cache pool (while that pool still existed) after I moved the app folder to a new pool.

 

SOLVED!

Link to comment

I'm probably missing something obvious... but not sure how to fix. I'm getting the "Share x set to cache-only, but files / folders exist on the array" message. And indeed, there are files for that share on the array.

 

So, following instructions for once, I set share x to 'prefer' cache, disabled docker and vm... and invoked the mover. That starts to do its magic, after which there is more data on the cache drive. Hooray.

 

However, I can still see the data on the array as well. Was expecting that to have been moved/merged to cache. And sure enough, when I set the share back to cache only, the message pops up again.

 

Not feeling comfortable just moving this manually and deleting from array, since that might overwrite new(er) versions. What's the safe move here? Think it's a bit overkill, but including the diagnostics anyway :) thanks in advance for the help!

kaasnas-diagnostics-20230408-1953.zip

Link to comment

You have a bunch of shares like that

 

appdata                           shareUseCache="only"    # Share exists on cache, disk1
d------d                          shareUseCache="only"    # Share exists on cache, disk1
t-------e                         shareUseCache="only"    # Share exists on cache, disk1

 

And the same steps need to be done for all of them.  Set them to "Prefer" and run mover.

 

It should be noted that mover cannot move any open files, which is why Docker and VMs need to be disabled in Settings for it to move their related files.
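If mover keeps skipping files, a sketch for seeing what still holds them open (using this user's share as the example path):

# recursively list processes with open files under the share
lsof +D /mnt/disk1/appdata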

 

If you're still having issues, then post the output of 

ls -ail /mnt/disk1/appdata

 

Link to comment
1 hour ago, Kaastosti said:

However, I can still see the data on the array as well. Was expecting that to have been moved/merged to cache. And sure enough, when I set the share back to cache only, the message pops up again.

If a file already exists on the cache it will not be moved. Have you checked for duplicate filenames? If there are any, you will have to manually sort out which copy you wish to keep.
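A minimal sketch for finding such duplicates, assuming the share lives at /mnt/cache/appdata and /mnt/disk1/appdata as in this thread:

# print relative paths that exist in both copies of the share
comm -12 <(cd /mnt/cache/appdata && find . | sort) <(cd /mnt/disk1/appdata && find . | sort)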

Link to comment

Right now I was focusing on appdata. I put it back on cache-only to see whether the message was still there, which it was. I then put it back on 'prefer', disabled Docker, disabled the VMs, and invoked mover. It finished instantly.

 

When I check /mnt/user/appdata, there are even folders there for dockers that no longer exist.

When I check /mnt/disk1/appdata, I only see folders for binhex-krusader and pihole there.

 

Did some comparison; both were safe to remove from the array. Done so... set the shares back to cache-only, re-ran FCP, no issues left :) Thanks for the help!

Link to comment

Hi,

I have the FCP alert coming up for having a directory called 'user' within the user directory (I have had it for ages and haven't figured out what to do).

So I did some homework. The only files in that directory are related to the live libvirt.img file that the Unraid VM service uses (I can see it in the system messages when libvirt comes up: Apr 10 13:04:00 Tower emhttpd: shcmd (306860): /usr/local/sbin/mount_image '/mnt/user/user/system/libvert/libvirt.img' /etc/libvirt).

If I was going to fix it, I suspect I need to tell libvirt to put the file somewhere else - I am not sure how to do that. It's an old Unraid 6.9.2.
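For what it's worth, the image location is set in Settings -> VM Manager (advanced view), and the value is stored on the flash drive; the exact file and key name here are from memory, so verify on your system:

# show the configured libvirt image path (file name assumed from stock Unraid)
grep -i image /boot/config/domain.cfg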

All VMs are deleted, so I don't have any VM data to lose.

 

 


 

Regards Dave

tower-diagnostics-20230410-1310.zip

 

Hi - I fixed it by looking at the extended parameters in the VM Manager tab. Thanks!

Link to comment
