Posts posted by ljm42
-
On 7/29/2023 at 7:07 PM, Alyred said:
Quick question: Any reason that FCP still tells me that I have multiple NICs on the same IP network? One of the cards is passed through to a VM. While I've got that warning set to "ignored", it's a valid configuration.
Please upload your diagnostics.zip (from Tools -> Diagnostics). Any NIC passed through to a VM would not be visible to FCP, so there must be something else triggering the notification.
-
27 minutes ago, Jaybau said:
I ran into the problem with v6.12.3.
Something is holding your zpool open, but it doesn't appear to be the docker.img:
Jul 27 15:15:42 Tower emhttpd: Unmounting disks...
Jul 27 15:15:42 Tower emhttpd: shcmd (154432): /usr/sbin/zpool export cache
Jul 27 15:15:42 Tower root: cannot unmount '/mnt/cache/system': pool or dataset is busy
I'd suggest starting your own thread; I don't think you are hitting the issue this thread is about, and there are other things in your log that are more concerning:
Jul 26 20:06:09 Tower kernel: critical medium error, dev sdh, sector 3317704632 op 0x0:(READ) flags 0x0 phys_seg 72 prio class 2
Jul 26 20:06:09 Tower kernel: md: disk0 read error, sector=3317704568
I'm not an expert on that but hopefully someone else can lend a hand.
-
The snippet you posted looks fine
-
This log is currently limited to 10MB, so it isn't going to fill your log partition on its own. In the next release we are going to cap it at 2MB IIRC
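If you want to check for yourself, the log partition usage can be inspected from a terminal. A quick sketch, assuming the stock Unraid layout where /var/log is a RAM-backed tmpfs:

```shell
# Show how full the log partition is
df -h /var/log
# List the largest items under /var/log to see what is using the space
du -sh /var/log/* 2>/dev/null | sort -h | tail -n 5
```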
-
In the short term, please disable IPv6 if you can. We are working on some fixes and will ask you to install a test release when it is ready.
Aside from that, please install the Fix Common Problems plugin and see if it has any suggestions for you.
-
6 hours ago, sphbecker said:
I upgraded to 6.12.1
If you are still on 6.12.1, please upgrade to 6.12.3 and provide diagnostics ( << click the link ), preferably after the problem happens
6 hours ago, sphbecker said:
Is there an SSH command to restart the web UI service without a full reboot?
Depending on the problem, these may help:
/etc/rc.d/rc.php-fpm restart
/etc/rc.d/rc.nginx reload
-
On 7/23/2023 at 11:33 AM, spykid said:
My web GUI fails to load after the update 6.12.3, tried different solutions on the forum to no success. Could someone please check the attached diagnostics and see anything wrong?
What version are you coming from?
Your SSL port is set to 80443, but the max is 65535
Carefully edit the config/ident.cfg file on the flash drive and change this line:
PORTSSL="80443"
to something else, like:
PORTSSL="40443"
(or any other number between 1000 and 65k that isn't in use by a Docker container)
Then reboot the system. Based on your settings, the webgui should be available at these urls:
http://192.168.1.75:8080
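If editing the file on another computer isn't convenient, the same change can be made from an Unraid terminal session. A minimal sketch, assuming the flash drive is mounted at /boot as on a standard install:

```shell
# Show the current (invalid) value first
grep PORTSSL /boot/config/ident.cfg
# Keep a backup, then replace the out-of-range port with a valid one
cp /boot/config/ident.cfg /boot/config/ident.cfg.bak
sed -i 's/^PORTSSL="80443"/PORTSSL="40443"/' /boot/config/ident.cfg
# Verify the change, then reboot for it to take effect
grep PORTSSL /boot/config/ident.cfg
```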
-
21 hours ago, unraider2334 said:
I hope this is something minor and easily fixed.
Your symptoms are not the same. Please start a new thread with your specific issues and include your diagnostics.zip ( <<< click the link )
-
1 hour ago, wirenut said:
I am one who has had an unclean shutdown after upgrading.
I followed the instructions for the upgrade to 6.12.3. I did have to use the command line instruction; the upgrade went fine and the server was up for 6 days. This morning I needed to reboot the server.
I spun up all disks.
I individually stopped all my dockers and my VM.
I hit the reboot button.
It is about 4 hours into the unclean-shutdown parity check with no errors. The log repeated this while shutting down:
Jul 22 08:34:04 Tower root: umount: /mnt/cache: target is busy.
Jul 22 08:34:04 Tower emhttpd: shcmd (5468228): exit status: 32
Jul 22 08:34:04 Tower emhttpd: Retry unmounting disk share(s)...
Jul 22 08:34:09 Tower emhttpd: Unmounting disks...
Jul 22 08:34:09 Tower emhttpd: shcmd (5468229): umount /mnt/cache
Jul 22 08:34:09 Tower root: umount: /mnt/cache: target is busy.
Jul 22 08:34:09 Tower emhttpd: shcmd (5468229): exit status: 32
Jul 22 08:34:09 Tower emhttpd: Retry unmounting disk share(s)...
This is not related to the recent bug fix. Most likely, you had an SSH session or a web terminal open and had cd'd to the cache drive, like this:
root@Tower:/mnt/cache/appdata#
If desired, you can install the Tips and Tweaks plugin; by default it will automatically kill any SSH or bash process when you stop the array.
There are other potential causes too, see https://forums.unraid.net/topic/69868-dealing-with-unclean-shutdowns/
If your normal workflow regularly gives unclean shutdowns, I'd get in the habit of stopping the array first before restarting. That will give you a chance to see the "Retry unmounting disks" message in the lower left corner and start investigating.
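To see what is actually holding the mount before you reboot, here is a minimal sketch that walks /proc looking for processes whose working directory sits under the cache mount (lsof or fuser give richer output if installed; /mnt/cache is just the path from the log above):

```shell
# Walk /proc and report any process whose current directory is under /mnt/cache.
# Such a process (e.g. an open shell) keeps umount reporting "target is busy".
for pid in /proc/[0-9]*; do
  cwd=$(readlink "$pid/cwd" 2>/dev/null)
  case "$cwd" in
    /mnt/cache*) echo "PID ${pid#/proc/} is holding $cwd" ;;
  esac
done
```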
-
Very odd. See this for running diagnostics from the command line:
https://docs.unraid.net/unraid-os/manual/troubleshooting/#system-diagnostics
-
Sounds like we're talking about the Remote Access feature? That is just for the webgui. To access Docker containers or other items on your network you'll want to use a VPN: https://forums.unraid.net/topic/84226-wireguard-quickstart/
-
1 hour ago, Frank1940 said:
I am another one who had an unclean shutdown when I did the upgrade. Let me provide some additional information, as I did not do things in the standard way. Basically, I did the first part of the reboot manually, which I think provides some insight into what happened in my situation.
When I did the upgrade from 6.12.2 to 6.12.3 on my Media Server, I attempted to manually stop the array before I did the reboot. The disks in the array all went to the "Unassigned" status but the array-stopping 'wheel' continued to run. I looked at the log and there was a series of messages over about 20-30 seconds saying that the cache drive could not be unmounted.
It won't help you now, but this was discussed in the first post in this thread, bullet 3 in particular. Now that you are on 6.12.3 this cause of unclean shutdowns won't happen.
-
The cache drive concept exists because writes to the Unraid array are comparatively slow. When writing new files they can go to an SSD cache drive and then get copied over to the main array of spinning drives overnight when nobody cares how long it takes.
I'm curious what benefit you see of putting an M.2 cache drive in front of a ZFS pool of NVME drives. Direct writes to the ZFS pool are going to be very fast, putting a cache drive in front of that doesn't get you anything that I can see.
-
2 hours ago, denzo said:
Error (re)installing, tried to update to latest but it failed and in my haste I uninstalled and tried to reinstall but get the following error:
plugin: downloading: fix.common.problems-2023.07.16-x86_64-1.txz ...
plugin: fix.common.problems-2023.07.16-x86_64-1.txz download failure: Generic error
Executing hook script: post_plugin_checks
How to fix?
Your flash drive has errors; it looks like that is preventing the file from being written to it:
Jul 19 13:01:31 NAS kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start 400e4138)
Jul 19 13:01:31 NAS kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start 400e4138)
Jul 19 13:01:31 NAS kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start 400e4138)
Jul 19 13:02:31 NAS kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start 200e0e43)
Jul 19 13:02:31 NAS kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start 200e0e43)
Jul 19 13:02:31 NAS kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start 200e0e43)
Jul 19 13:02:31 NAS kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start 400e4138)
I'd recommend putting the drive in a Windows computer and running chkdsk. It may work after that. You should also make a backup and get a new flash drive on order ASAP in case this one fails completely, in which case see https://docs.unraid.net/unraid-os/manual/changing-the-flash-device
-
1 hour ago, accelaptd said:
How do I actually get rid of these? They're not on my plugins list.
I would expect them to be listed on your Plugins tab. If not, please upload your diagnostics.zip (from Tools -> Diagnostics)
Note that it is really not advised to run rc releases for so long. You should upgrade to 6.11.5 or 6.12.3
-
We will definitely announce when this feature is available
-
I really appreciate people posting about their experiences. But without diagnostics we have no way to begin to investigate.
Note: we don't really need diagnostics about normal macvlan call traces, but if you get call traces with the solution discussed in the first post of this thread, we definitely need to see diagnostics.
-
16 hours ago, DevanteWeary said:
Setting up Wireguard for the first time so want to make sure I'm doing it right.
Be sure to follow the first post here very closely:
https://forums.unraid.net/topic/84316-wireguard-vpn-tunneled-access-to-a-commercial-vpn-provider/
-
On 7/18/2023 at 5:13 AM, shredswithpiks said:
I just updated and the array went straight into a parity check
It won't help you now, but this was discussed in the first post in this thread, bullet 3 in particular. Now that you are on 6.12.3 this cause of unclean shutdowns won't happen.
-
14 minutes ago, futureshocked said:
I had tried with and without IPv6, but it made no difference.
I tried again as per your suggestion, rebooted, and tested with a plugin install, but again no luck (see screenshot attached).
I have attached a new diagnostics file.
So the system must be on the network, since you can access its webgui. Since it can't connect to the Internet, that sounds like either a bad gateway or a bad DNS server.
Go to Settings -> Network Settings and see if statically setting the DNS server to 8.8.8.8 helps.
-
On 7/6/2023 at 3:21 AM, SH4LT1S said:
What do I do if my DNS is leaking?
Follow the whole guide in the OP, including the "Testing the tunnel" part
-
On 7/9/2023 at 10:52 AM, bluecat said:
Thanks for this guide. It worked fine in the beginning for me.
Now I'm experiencing the problem, that I can't choose the wg0 interface anymore when creating a Docker. This happened after I switched the Docker data root setting from btrfs vDisk to folder. I tried switching it back and wg0 appears again.
I don't understand why this is happening and where the correlation is here
Odd. Does it help to make a dummy change to the WG config and apply?
If not, diagnostics might be helpful.
-
4 hours ago, isvein said:
So the DNS should always be set as extra parameter on each docker and NOT under the tunnel dns settings?
The "Peer DNS server" setting isn't really applicable when in "VPN tunneled access" mode because Peer settings apply to Peers, not Unraid itself.
Best to follow the guide in the OP
-
[6.12.2] Array stop stuck on "Retry unmounting disk share(s)"
in General Support
Posted
This sounds like something else is going on, I'd recommend starting a new thread with details of your issue.