kernelpanic

Everything posted by kernelpanic

  1. Thank you for this @Hoopster! Transcoding with Plex has been a nightmare for me. I've tried ten ways to Sunday to get transcoding to RAM to work. Then I tried using an unassigned device which was a pain in its own right. This has fixed it for me! Thanks again.
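For anyone landing on this later, the gist of the RAM-transcode setup as I understand it (paths are the common defaults, not necessarily your template's — adjust to your own server):

```shell
# Sketch of transcoding to RAM on Unraid (assumed/common settings, not
# the exact template — adjust paths to your own setup).
# 1) In the Plex Docker template, add a path mapping:
#      Container path: /transcode    Host path: /tmp
#    (/tmp on Unraid is RAM-backed, so transcodes never touch the SSD.)
# 2) In Plex: Settings -> Transcoder -> "Transcoder temporary directory",
#    set it to /transcode.
# The equivalent mapping on a plain docker command line:
docker run -d --name plex -v /tmp:/transcode plexinc/pms-docker
```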
  2. Is there any way to change the version of tModLoader that is downloaded for the Terraria-tModLoader container? I'm trying to run the hot-off-the-press alpha. The only option I see is to change the target game version.
  3. @cisellis So is the initial problem here that your cache drive was formatted as XFS? I just upgraded to 6.9 and am getting no GUI. Once that's fixed, I'm hoping I don't have the same drama with the cache, as mine is formatted as XFS as well.
  4. Yes, exactly. There is no redundancy with XFS so far as I can tell. I back up appdata to the share every night, so if the SSD dies I'll have a backup to some degree. I'd like to see the btrfs issue fixed at some point, though.
  5. Actually, as I typed this I queued up some huge downloads. They all finished around the same time and the server took a shit. So, it's not fixed, but it's definitely better than it was. FWIW, I have two SSDs; one is a Samsung and one is not. I think I'll move it all to the non-Samsung SSD tomorrow and try again. Edit: Take that back, that was my machine kernel panicking again. I still stand by the XFS conversion being a good call.
  6. I converted my cache (had to break the pool) to XFS (which does not support pooled drives) and the issue seems to be resolved. I can't say 100%, but I haven't seen any slowdowns since the conversion. The conversion was not exactly simple; I found the steps in the thread linked earlier in this thread.
     1) Stop the array, then disable the Docker service and VMs.
     2) Change the cache pool to single mode (if you're on 6.8.3) and allow the balance to complete. Then stop the array, unassign one of the cache disks, and restart the array.
     3) The array will balance the cache again. I could tell it was done because writes stopped happening to the drive I removed from the cache. Also, the text "a btrfs operation is in progress" appeared at the bottom of the Main tab by the stop array button.
     4) When the text was gone, I formatted the spare cache disk through Unassigned Devices (you must have Unassigned Devices Plus installed).
     5) Use the console to rsync the data from the main cache to the spare cache drive, e.g. rsync -avrth --progress /mnt/cache/ /mnt/disks/second_ssd/
     6) Once the copy is done, stop the array again and format the remaining cache drive as XFS. Note: you must change the number of cache slots to 1 for XFS to appear as a file system option.
     7) Once the format is complete, restart the array and copy the data back using the same rsync command with the paths flipped, e.g. rsync -avrth --progress /mnt/disks/second_ssd/ /mnt/cache/ (note the trailing slash on the source, so rsync copies the contents rather than nesting the directory).
     8) Once the copy is done, you should be all set. Restart the Docker and VM services.
     If there is an easier way, I couldn't find it in the forums, but I'm glad to have this working now. I may have skipped a step above as I did this from memory, but that is the gist. I am not pleased that I no longer have 1TB of cache from two pooled 500GB SSDs, but I'd rather it function properly than not at all.
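One gotcha with the rsync commands in the steps above: the trailing slash on the source path matters. A tiny self-contained demo with throwaway temp dirs (since the real /mnt paths only exist on the server):

```shell
# Demonstrate rsync's trailing-slash rule using disposable directories.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/appdata"
echo "hello" > "$src/appdata/test.txt"

# Trailing slash on the source: copy the *contents* of $src into $dst.
rsync -avrth "$src/" "$dst/"
test -f "$dst/appdata/test.txt" && echo "contents copied flat"

# No trailing slash: the source directory itself gets nested inside $dst.
dst2=$(mktemp -d)
rsync -avrth "$src" "$dst2/"
test -f "$dst2/$(basename "$src")/appdata/test.txt" && echo "directory nested"

rm -rf "$src" "$dst" "$dst2"
```

Get the slash wrong on the copy-back and you end up with /mnt/cache/second_ssd/ instead of your data at the top of the cache.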
  7. Hi all, I'm having a bit of an odd issue with my server. It had been fairly stable for about 14 months, but since Unraid 6.8 the server seems to grind to a halt randomly. I typically notice the issue when new web resources stop loading on my other devices, as I'm running a PiHole container on the server. Upon trying to access the web UI, different things have happened.
     1st, 2nd, and 3rd times) The web UI loaded with normal CPU utilization, but the Docker containers were stopped and would not restart. Restarting the server fixed the issue.
     4th time) This was yesterday. Upon loading the web UI I could see that 14/32 cores were pegged at 100% utilization. I checked the Docker tab and all containers were at 0% CPU utilization except for one, which was at 3%. I tried stopping all containers and they wouldn't stop: the status wheel kept spinning, then eventually went away, but the containers never stopped. I tried stopping the array and it wouldn't stop either; the stop button greyed out for a bit but became active again after a short while. I then clicked restart in the header and the server said it was going down, but it never actually shut down. After about 30 minutes of waiting, I held the power button to hard restart the server. Upon coming back up, I was able to start the array and everything was working again.
     5th time) Last night the containers stopped for a backup and never started back up. When I woke to no internet (PiHole DNS container) this morning, I checked and the containers were all stopped. I was able to manually start all of the containers with no issues.
     I enabled syslog mirroring to a share on the cache drive after the 3rd occurrence. I'll attach the syslog and diagnostic report. There appear to be a fair number of kernel panics in the syslog. Thanks for the help! syslog-10.0.1.21.log hoth-diagnostics-20200325-1106.zip
  8. Following this thread because I've had this exact same issue for some time now. I noticed this as soon as I set up my server about 14 months ago. I added a 2nd SSD in hopes of alleviating this issue, but it had no effect.
  9. Try changing the value for NVIDIA_DRIVER_CAPABILITIES to "all" without the quotes.
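To spell that out: in the Unraid template you edit the existing NVIDIA_DRIVER_CAPABILITIES variable and set its value to all. The equivalent on a plain docker command line (the image name here is just a placeholder) would look something like:

```shell
# Per the NVIDIA container runtime, "all" enables every driver capability
# (compute, video/NVENC, utility, ...), which is what hardware transcoding
# needs. Image name below is a placeholder — use your own container's image.
docker run -d --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  linuxserver/plex
```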
  10. I am in this exact situation. Perhaps I'm missing a key piece here, which I believe is the distinction between tdarr and tdarr_aio. How would one go about using the tdarr_aio container? There is only one app listed for tdarr.
  11. Hello, I'm facing an odd issue recently that seems to affect some, but not all, of my Docker containers. When I change settings in certain containers, the settings don't apply. For instance, in Deluge, if I change the enabled plug-ins and check back a little while later, the change is undone. The same thing was happening in binhex-rtorrentvpn. To test, I removed that container and installed the linuxserver.io rTorrent container. This container (having never been installed before) has the same issue: if I turn off randomized incoming ports or change the default downloads directory and apply the changes, they'll be back to the way they were once I restart the container. Both containers have PUID = 99 and PGID = 100. The appdata share is shared over SMB as private. Fix Common Problems comes back with nothing other than permission errors on a different share (and it obviously doesn't check permissions on the appdata share). If I edit the config files directly from Windows, the changes persist, which leads me to believe the containers are not able to write to the /config directory. Sonarr, Radarr, UniFi, and Plex seem to have no issues saving settings, updating their respective databases, etc. Any help is greatly appreciated. hoth-diagnostics-20190620-2227.zip
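In case it helps anyone debugging the same thing: on Unraid, PUID 99 / PGID 100 map to nobody:users, so an ownership check on the affected container's appdata directory (the path here is just an example, substitute your own) would look like:

```shell
# Hypothetical path — substitute the affected container's appdata dir.
appdata=/mnt/user/appdata/deluge

# Owner/group should show uid 99 (nobody) and gid 100 (users);
# anything else explains why the container can't write its /config.
ls -ldn "$appdata"

# If editing files over SMB from Windows changed the ownership,
# reset it recursively, then restart the container:
chown -R nobody:users "$appdata"
```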