user20C

Members
  • Posts: 18
  • Joined
  • Last visited

user20C's Achievements

Noob (1/14)

Reputation: 2

  1. I updated to the latest image and I am still getting an error. The new error is: "2022-05-13 16:58:04.060 Error App: ProcessRun 'StreamTranscode 1c46dc': Error starting Ffmpeg. WorkingFolder: null". Anyone have any ideas?
  2. It was my Windows 10 machine trying to communicate with Unraid over SMB. I don't know what actually fixed my problem: I updated my backup software, which was really out of date, updated Windows, and then followed spaceinvaderone's YouTube video on optimizing Windows settings. One of those three things fixed my issue, and the server has been doing great ever since.
  3. I am getting the same error. I switched back to V2 and the error goes away.
  4. I was having the same problem. I fixed it by REPLACING the cipher in the new .ovpn file with AES-256-GCM. I'm a newbie at this stuff, so I was confused at first about whether I just needed to add a line or replace the existing one. It also appears to be case sensitive, so make sure the cipher is in all caps. Also, making the changes in Notepad on my Windows machine would not work; I had to use Atom to make the changes. Hope this helps. (There is a sketch of the edit after this list.)
  5. I thought I would provide an update. I have narrowed down the source of what is triggering the shfs processes to consume all my memory. Today I read a thread on the forums here explaining how shfs has to do with reading files in the different shares. I decided to start turning off everything that would be accessing the shares to see if it would make a difference in shfs consuming the memory. Up until this point I had only turned off Docker and plugins, which had no effect. I went into the Unraid settings and turned off all the network services (AFP, NFS, SMB, FTP, and WireGuard VPN). Reboot and boom, no shfs processes consuming memory. I let it run like that for 30 minutes, restarted the server, and let it run again for 30 minutes just to make sure. Then, one by one, I turned those services on again. It turns out SMB is my problem. So I then turned SMB on but set all the options under SMB to no. Everything has been working fine for a few hours now, and I can still browse my mapped network drives from my PC. Tomorrow, when I have time, I will turn each of those settings (WSD, NetBIOS, and enhanced macOS interoperability) back on to see which one causes my issues. (There is a CLI sketch of this isolation test after this list.) I have always had issues viewing my Unraid shares on my PC, and I know there is a great video out there from spaceinvaderone showing how to fix it; I just haven't taken the time to do it. I just log in with "NOBODY" each time it asks. I'm hoping my problem has something to do with that, SMB, and the WSD setting or something. So keep your fingers crossed that I have this figured out, or at least know where to start troubleshooting.
  6. Mine used to do this; it would go for months solid as a rock. I understand that, and to me that makes me think I have something hardware related not playing nice with Unraid. I haven't made any changes to the hardware, but I wonder if something is failing. My first thought was memory, but there are no errors in the log. I tried taking out half the memory and restarting. Same behavior: shfs started eating memory. Then I took that half out and replaced it with the half I pulled before. Again, same behavior. In my mind that rules out a single stick of memory being the problem. Of course, I could have multiple sticks failing, so it's not a perfect test. My other thought is that if shfs has to do with my shares and disks, maybe it is something in the HBA card. I have an LSI 9211-4i. Could it possibly be firmware corruption? I know I'm throwing darts here, but just brainstorming. Last question: is there any way to see what is being written to memory? Also, this week I think I may try the following just to see what happens; again, throwing more darts, but I'm working from home and have the time, so why not. 1. Swap my Windows PC hardware into the server, reusing the HBA, backplane, and all disks. 2. Order a new USB drive; maybe mine is on the way out, so when I start the server something corrupted gets loaded? Is that even possible? I may also start researching a new server build. I keep reading about these Ryzen builds and how great they are. My current dual E5-2670 server can handle everything I use it for easily, but it's loud, power hungry, and puts out a lot of heat. Thanks for any help and/or feedback! P.S. The parity check is at 65% and RAM utilization is only at 56%. Maybe the parity check will finish before shfs crashes!
  7. Still doing daily restarts. My monthly parity check just started, but I will probably have to cancel it because it will not complete before SHFS uses all available RAM. I'm just thankful I bought the server with 192GB of RAM; if I had something reasonable like 32 or 64GB, I would have to restart two or three times a day, and that would just be a deal breaker. I will add that if I forget to restart the server, SHFS will drive my RAM usage to 100%, then it eventually stops running. I guess the process kills itself: it no longer shows in htop, and RAM used drops back to normal. Once this happens, all my shares disappear. At this point I am trying to decide if I should just build a new server, switch away from Unraid, or use a script to restart daily (there is a cron sketch for that after this list).
  8. I just wanted to report I am still doing daily restarts as SHFS still eats up all my memory. If there is any other data I can provide let me know. For the time being I have docker running so the kids can watch their movies while we are stuck at home.
  9. Wanted to add another diagnostics file and a screenshot of htop showing the memory usage. Here you can see Unraid running with Docker turned off, no plugins installed, and no VMs. Fresh USB drive also. It is using 55GB of memory, and the only thing I did was start the array. To my knowledge nothing else is running. Any help would be appreciated. tower-diagnostics-20200402-2057.zip
  10. Hello everyone. I am submitting this bug report at the request of a moderator in the general support section. I had a thread going over there trying to figure this out, but we hit a dead end. I have run into a problem where the SHFS process consumes all of my available memory over time. I have a total of 192GB installed, which shows as 189GB available to Unraid. It takes around 24 hours for SHFS to consume all that memory; then SHFS will crash and all shares disappear until a restart. Booting in safe mode exhibits the same issue. I have been observing this issue by opening a terminal window and running the htop command. I then sort by mem%, which brings multiple instances of SHFS to the top. Then I just sit and watch as it gradually consumes more and more memory. (There is a logging-loop sketch for tracking this after this list.) I was instructed in the general support thread to redo my flash drive, which I did. The flash drive was formatted and recreated, and only the super.dat and key file were restored from the old config folder, as instructed. Attached are a screenshot of htop showing the shfs instances that were running and also diagnostics from last night. This is a completely clean OS install (using existing data disks): no plugins installed, Docker set to off, and no VMs. Any help would be greatly appreciated! I didn't know what priority to set; it's urgent for me, but I am probably the only person with this issue, so that makes it not so urgent for the community. Feel free to change it. tower-diagnostics-20200330-2034.zip
  11. I will, thank you for the help provided so far!
  12. OK, I restarted with Docker set to off, no plugins installed, and no VMs. After starting the array, I saw 1.8G of memory being used. Thirty minutes in it was at 4.59G; at 40 minutes it was at 6.06G; and at 1 hour it was at 8.36G of memory used. So a steady increase. I believe this would go on for about 24 hours until it maxes out the total 189G available. I attached a screenshot of htop, again sorted by mem%, and also added another diagnostics file. Any ideas or help is greatly appreciated, and thank you for the help so far. tower-diagnostics-20200330-2034.zip
  13. I guess I thought we had already ruled out anything Docker or plugin related as the cause. I will restart with Docker set to off and see if that changes anything with the newly created USB.
  14. I shut down the server, pulled the USB drive, and put it in my Windows PC. I copied the config folder to my desktop, then formatted the drive. I downloaded the USB creator tool from the Unraid website and created a new USB with 6.8.3. I then copied over the super.dat file and the key file from the old config folder to the new config folder. I put the USB back in my server and started it up. After starting the array manually, I get the same behavior: several instances of shfs are running again and slowly consuming memory. It's only been running for 40 minutes, so total mem% is still low, but the trend is there as it was before. So we have ruled out plugins, Docker, and the USB drive. It only happens when the array is started. Another issue I have discovered is that if I try to stop the array, the system hangs. I get a message in the bottom left of the browser that says "Array Stopping: Retry unmounting user share(s)..." I guess this is related to the shfs processes not stopping or hanging.
  15. I booted into safe mode with the web GUI. With the array not started, everything ran as expected: the system idled with shfs not showing up. As soon as I started the array, shfs processes started and RAM utilization has been slowly going up. I know shfs is a normal process and is expected to run; however, I have multiple instances of it, and they all slowly eat up the RAM. Again, this is in safe mode with the web GUI enabled. I am observing all this by opening a terminal window, running htop, and sorting by mem%. I assume this is an acceptable way to monitor this. So it seems I have an underlying OS problem? Or could a hardware issue cause this? There are a lot of RAM modules in this server; I purchased it off eBay as-is, 192GB of RAM total. I haven't been in the case recently or changed anything hardware-wise. Just thinking out loud here and trying to give as much relevant information as possible.
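
For the cipher fix in post 4: here is a minimal sketch of the kind of edit involved. The filename client.ovpn is a placeholder, and doing the replacement with sed on a Linux box sidesteps the line-ending problems that can trip up Notepad on Windows; this is an illustration of the change, not the exact file from that post.

```bash
# Hypothetical example: replace the existing "cipher" line in an OpenVPN
# client config rather than adding a second one. "client.ovpn" is a
# placeholder filename. Cipher names are case sensitive, so AES-256-GCM
# must be in all caps.
sed -i 's/^cipher .*/cipher AES-256-GCM/' client.ovpn
```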
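Post 5's isolation test was done through the Unraid GUI (Settings > Network Services). A rough CLI equivalent is sketched below, assuming Slackware-style rc scripts; the exact script names vary by Unraid version, so treat them as assumptions rather than known paths.

```bash
# Hypothetical CLI version of the service-isolation test in post 5.
# Script names are assumptions and may differ by Unraid version.
/etc/rc.d/rc.samba stop    # stop SMB
/etc/rc.d/rc.nfsd stop     # stop NFS
# ...watch shfs memory for a while (see the monitoring loop below),
# then re-enable one service at a time until the growth comes back:
/etc/rc.d/rc.samba start   # turn SMB back on first and re-test
```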
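Post 7 mentions possibly using a script to restart daily. A minimal sketch, assuming a root cron entry (for example via the User Scripts plugin); the powerdown path below is what recent Unraid 6.x releases ship, but verify it on your own system before relying on it.

```bash
# Hypothetical root crontab line for a daily reboot at 4 a.m.
# /usr/local/sbin/powerdown is Unraid's shutdown helper on recent 6.x
# releases (an assumption; check your version). -r requests a reboot
# after stopping the array cleanly.
0 4 * * * /usr/local/sbin/powerdown -r
```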
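Posts 10 and 12 track shfs memory growth by eyeballing htop. To capture the same curve over hours without sitting at the terminal, a simple logging loop like the sketch below works; /tmp/shfs-mem.log and the 5-minute interval are placeholders.

```bash
# Hypothetical monitoring loop: log the combined resident memory of all
# shfs processes every 5 minutes. ps reports RSS in KiB; 1048576 KiB = 1 GiB.
while true; do
    ps -C shfs -o rss= | awk -v d="$(date '+%F %T')" \
        '{ sum += $1 } END { printf "%s shfs RSS: %.2f GiB\n", d, sum/1048576 }' \
        >> /tmp/shfs-mem.log
    sleep 300
done
```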