wallywhatever Posted May 25, 2022

5 hours ago, Squid said: Run the check filesystem against disk 1

Thanks. I was wondering how to deal with that. Appreciate it.
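For anyone landing here later, the "Check Filesystem" step can also be run from the CLI. This is a minimal sketch, assuming disk 1 is XFS-formatted and the array has been started in Maintenance mode; the /dev/md1 device name is an assumption, so adjust it for your disk:

```shell
#!/bin/bash
# Hedged sketch: CLI equivalent of Unraid's "Check Filesystem" for disk 1.
# Assumes disk 1 is XFS and the array is started in Maintenance mode,
# which exposes the disk as /dev/md1 (adjust the device for your setup).
DEV=/dev/md1

if [ -e "$DEV" ]; then
    # Dry run first: -n reports problems without modifying the filesystem
    xfs_repair -n "$DEV"
    # If problems are reported, repeat without -n to actually repair:
    # xfs_repair "$DEV"
else
    echo "Device $DEV not present -- start the array in Maintenance mode first."
fi
```

Running with `-n` first is the safe order: it tells you whether a repair is needed without writing anything to the disk.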
huanbua Posted July 7, 2022

Same problem: after a reboot, all shares are gone. For what it's worth, I'm only testing this system, but from what I've seen in testing it is very unstable. Two different systems, same problem: after a reboot all shares are gone.

I can't even run ls -lai /mnt/user on system 1; I get an nginx error when I try to connect to the shell. The other system shows only an LXC container under /mnt/user; the other stuff (appdata, domain, isos, lxc and system) is at /mnt/disk1/.

In my case it was the LXC app.
Squid Posted July 7, 2022

6 hours ago, huanbua said: in my Case it was the LXC App

Yeah, I saw the same thing once and had been meaning (and kept forgetting) to bring it to @ich777's attention.
ich777 Posted July 7, 2022

6 hours ago, huanbua said: in my Case it was the LXC App

Do you have the diagnostics from when that happened? Which path did you choose for LXC?
huanbua Posted July 7, 2022 Share Posted July 7, 2022 1 minute ago, ich777 said: Have you got the Diagnostics when that happened? What path for LXC have you chosen? /mnt/lxc at install it gives a warning dont use /mnt/user so i took /mnt/lxc i purged all apps and restarted after every single one... the lxc was the last one because i dont want to uninstall it ... Quote Link to comment
ich777 Posted July 7, 2022 Share Posted July 7, 2022 7 minutes ago, huanbua said: /mnt/lxc at install it gives a warning dont use /mnt/user so i took /mnt/lxc But this share doesn't exist or am I wrong, or how do you created that share, is this running off of a ZFS share? It was meant that way that you select a path to a physical share (like /mnt/disk1/lxc or /mnt/cache/lxc) and not the FUSE file path like /mnt/user/lxc Quote Link to comment
huanbua Posted July 7, 2022 Share Posted July 7, 2022 (edited) 7 minutes ago, ich777 said: But this share doesn't exist or am I wrong, or how do you created that share, is this running off of a ZFS share? Nope i created a share for it, shares tab create share and no ZFS aktually i have problems with my p440 raid controller and dont wanted to build an zfs without smart funktion. So im fiddeling with it to bring it in IT Mode or HBA. To test a bit i just created 4 Raid 0 Volumes and use the Native xfs but only SMART funktion from ILO ... Edited July 7, 2022 by huanbua Quote Link to comment
ich777 Posted July 7, 2022 Share Posted July 7, 2022 12 minutes ago, huanbua said: Nope i created a share for it, shares tab create share But how did you do that? All user shares are created within /mnt/user/ or better speaking /mnt/DISKNAME/ So if you selected the path /mnt/lxc it got ultimately created in RAM… Quote Link to comment
huanbua Posted July 7, 2022 Share Posted July 7, 2022 13 minutes ago, ich777 said: But how did you do that? All user shares are created within /mnt/user/ or better speaking /mnt/DISKNAME/ its a Device i added to the shares 250gig ram device i have to much ecc ram free so i thought it would be a good idea to run the lxc containers from a ram dev... sry i dont mentioned that Quote Link to comment
ich777 Posted July 7, 2022 Share Posted July 7, 2022 9 minutes ago, huanbua said: its a Device i added to the shares 250gig ram device i have to much ecc ram free so i thought it would be a good idea to run the lxc containers from a ram dev... sry i dont mentioned that How did you do that exactly? It would be way better to create a share that is located on the cache or a single disk and use it in that way like mentioned /mnt/cache/lxc or /mnt/diskX/lxc or whatever. It can also be the case that this is caused because you are created the share at /mnt/ Have you created this RAM disk by hand and mounted it to /mnt/lxc? Quote Link to comment
huanbua Posted July 7, 2022 Share Posted July 7, 2022 38 minutes ago, ich777 said: How did you do that exactly? It would be way better to create a share that is located on the cache or a single disk and use it in that way like mentioned /mnt/cache/lxc or /mnt/diskX/lxc or whatever. It can also be the case that this is caused because you are created the share at /mnt/ Have you created this RAM disk by hand and mounted it to /mnt/lxc? i created it hardware side and unraid foud it as nvme drive so i took this to mount it with add share 1 Quote Link to comment
JonathanM Posted July 7, 2022

8 minutes ago, huanbua said: i created it hardware side

Please explain further. I'm not familiar with setting up a RAM drive in hardware.
ich777 Posted July 7, 2022 Share Posted July 7, 2022 32 minutes ago, huanbua said: i created it hardware side and unraid foud it as nvme drive so i took this to mount it with add share I really don't understand what you are saying here, how can you took a NVME drive and add it as a share? Quote Link to comment
huanbua Posted July 7, 2022 Share Posted July 7, 2022 (edited) 30 minutes ago, ich777 said: I really don't understand what you are saying here, how can you took a NVME drive and add it as a share? https://support.hpe.com/hpesc/public/docDisplay?docId=c05086876&docLocale=en_US with this i made an UEFI Shell RAM Disk this disk showed up for me as NVME Drive and i shared it Edited July 7, 2022 by huanbua Quote Link to comment
ich777 Posted July 7, 2022 Share Posted July 7, 2022 48 minutes ago, huanbua said: with this i made an UEFI Shell RAM Disk this disk showed up for me as NVME Drive and i shared it And how did you mount it to /mnt? Anyways, regardles of how did you do all of that, please do it like it is recommended in the support thread and also on the settings page from LXC use something like /mnt/cache/lxc or /mnt/disk2/lxc or you even can mount it via Unassigned Devices and you have to change it something like that /mnt/disks/DISKNAME/lxc I‘ve never had the issue that /mnt/user was missing. Could it be also the case that it‘s probably related to your HBA since you are having issues like you‘ve mentioned above with ZFS too? Quote Link to comment
huanbua Posted July 7, 2022 Share Posted July 7, 2022 28 minutes ago, ich777 said: Could it be also the case that it‘s probably related to your HBA since you are having issues like you‘ve mentioned above with ZFS too? nope, i made a share from a device i think i used unassigned devices for it anyways nope HBA dont work for me at the moment i used the woprk around to take raid 0 volumes. im working on it to get hba working - i use a custom ilo for pmw signaling my fans on the fly so i have to get a workarround to handle the hba mode, atm i have no hba option in my raid config... i need to compile the controller firmware in my custom ilo software to force hba on ... its a bit **** Quote Link to comment
ich777 Posted July 8, 2022

16 hours ago, huanbua said: nope, i made a share from a device i think i used unassigned devices for it

If you are using Unassigned Devices to mount the disk/device you've created, it would be at /mnt/disks/DISKNAME/lxc and not /mnt/lxc.
rmp5s Posted November 12, 2022

On 1/23/2022 at 5:28 AM, Squid said: Something is weird. What is the output (via the terminal button) of ls -ail /mnt/ and ls -ail /mnt/user Can you also stop the array, go to Global Share Settings and make a change (any change), apply it and then revert it

Hey, @Squid, I ran these and this was the output. The first one looks normal(ish), but the second one? No idea. "Transport endpoint is not connected"? Same thing when I try to change permissions as mentioned previously. I just made a thread about my specific issues; the log and some more details can be found here.
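For context, "Transport endpoint is not connected" on /mnt/user generally means the FUSE process behind the mount (shfs on Unraid) has died while the mount point is still registered with the kernel. A quick hedged check, assuming only the shfs process name from this thread:

```shell
#!/bin/bash
# Hedged check: if ls /mnt/user answers "Transport endpoint is not
# connected", the FUSE process (shfs) backing it has usually crashed.
# Verify whether it is still alive:
if pgrep -x shfs > /dev/null 2>&1; then
    status="shfs is running"
else
    status="shfs is not running -- /mnt/user is a stale FUSE mount"
fi
echo "$status"
```

A reboot (or stopping and restarting the array) respawns shfs, which would be consistent with the reboot-fixes-it reports in this thread.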
rmp5s Posted November 13, 2022

I rebooted and chmod worked. We'll see if that changes anything in the long term. I'm having the same "disappearing shares" issues as others above.
Nanuk_ Posted March 19, 2023

Hello,

First-time user here having the same issue. Mine is triggered during a batch copy from a mounted disk to the array using Krusader. Partway through the copy the shares disappear and the copying stops. Here is the diagnostic: tower-diagnostics-20230319-1159.zip

One thing I noticed after the first occurrence is that the server name changed from what I assigned it to the default "Tower". After several reboots the shares came back, but after starting a copy in Krusader they disappear again and the copying stops. I tried the chmod fix after a reboot and I'm hoping it helps. I also switched the USB stick from the NZXT internal USB slot to one on the motherboard.

Here's hoping someone has an idea what to do. I'm a first-time user, and so far I've bricked 4 SSDs trying Unraid (they were in a pool, and when I tried to remove them they disappeared; even Windows 10 and the BIOS don't see them anymore).
Nanuk_ Posted March 22, 2023

Sooo... I think I figured out my disappearing-shares issue, and it seems to have been related to my HBA card (LSI SAS2308 PCI-Express Fusion-MPT SAS-2). It turns out the Noctua NF-A4x10 I mounted on it a while back had stopped working, and I believe the card was overheating. I also replaced the Fractal 140 mm fans on the front of the case with Noctua NF-A15s, and since I couldn't get Dynamix Auto Fan Control to detect my motherboard's fan controller, I set the fans manually in the BIOS. This dropped HDD temps from the mid-45s down to 35-38 degrees.

The other things I tried:

1. Removed the heatsink, cleaned it and re-applied the thermal paste; this did not fix the UDMA errors.
2. Replaced the cables; this did not work either.
3. Attached the HDDs directly to the SATA ports on the motherboard, which seemed to fix the issue.

Either the new cables I got were borked or the HBA needs to be replaced. I kept three old 2 TB drives attached to the HBA card to see whether the new cables, the re-applied thermal paste and the new fan would fix it, but sadly the "UDMA CRC error count" just kept climbing; it's currently over 150k instances. I'm considering getting a new LSI SAS2308 card.
kexxt Posted May 17, 2023

Okay guys, I've been dealing with this issue off and on for a WHILE, and I just solved it. For me, the issue was down to Unraid's extremely low open-file limit. ulimit -n would return something like 40k, which for my specific use case was NOT enough. Running ulimit -n N (where N is the desired value) doesn't actually help, because shfs is already running and keeps its tiny file-descriptor limit.

I wrote this script, which increases the limit for all running shfs processes (parent and children). With it, the shares no longer drop out when the maximum-open-files limit is reached.

```shell
#!/bin/bash

# Function to set ulimit -n for child processes
function set_ulimit_for_children() {
    local parent_pid=$1
    local limit=$2

    # Get the list of child PIDs from pstree output
    local child_pids=$(pstree -p $parent_pid | grep -oE '[0-9]+')

    # Iterate over the child PIDs and set ulimit -n for each child process
    for child_pid in $child_pids; do
        # Skip the parent PID
        if [ "$child_pid" != "$parent_pid" ]; then
            echo "Increasing ulimit -n for PID $child_pid"
            prlimit --pid $child_pid --nofile=$limit
        fi
    done
}

# Set the desired ulimit -n value
limit=1048576  # A million is a lot of open files

# Get PIDs for any instances of "/usr/local/sbin/shfs"
parent_pids=$(pgrep -f "/usr/local/sbin/shfs")

# Iterate over all instances of "/usr/local/sbin/shfs"
for parent_pid in $parent_pids; do
    # Check if the parent process exists
    if ! kill -0 $parent_pid > /dev/null 2>&1; then
        echo "No running /usr/local/sbin/shfs process found with PID $parent_pid."
        exit 1
    fi

    # Set ulimit -n for the parent process
    echo "Increasing ulimit -n for PID $parent_pid"
    prlimit --pid $parent_pid --nofile=$limit

    # Call the function to set ulimit -n for child processes
    set_ulimit_for_children $parent_pid $limit
done
```
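If you run kexxt's script, you can verify that the new limit actually took effect by reading the process's limits straight out of /proc. This is a generic sketch: it uses the current shell's PID ($$) so it runs anywhere; on a real Unraid server you would substitute a PID from pgrep -f shfs instead.

```shell
#!/bin/bash
# Hedged verification: the soft "Max open files" limit of any PID can be
# read from /proc/<pid>/limits. $$ (the current shell) is used here only
# so the demo runs anywhere; use a shfs PID on a real server.
pid=$$
soft_limit=$(awk '/Max open files/ {print $4}' "/proc/$pid/limits")
echo "PID $pid soft open-files limit: $soft_limit"
```

After a successful prlimit call, the soft limit printed for the shfs PID should match the value the script set (1048576 in kexxt's version).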
dgirard Posted May 14

I'm hoping this works, as this has been biting me for a while. Thanks for figuring this out, kexxt! One thing: it looks like the location of shfs has changed, so the above script from kexxt needs a small modification. The lines that include "/usr/local/sbin/shfs" need to be changed to "/usr/local/bin/shfs".

Just ran the script; we'll see if things get stable. Fingers crossed!