
Posts posted by kernelpanic

  1. On 10/13/2022 at 7:37 PM, Hoopster said:

    This could be a problem. I used to have mine set up this way, but without limiting transcoding RAM in some way, /tmp will eventually fill all available RAM, especially if there are multiple transcoding sessions going on simultaneously.

     

    I also have 64GB of RAM on my server but have limited transcoding to 16GB.

     

    I have this in my go file to limit transcoding to 16GB and to recreate the folder in RAM on reboot:

    mkdir /tmp/PlexRamScratch
    chmod -R 777 /tmp/PlexRamScratch
    mount -t tmpfs -o size=16g tmpfs /tmp/PlexRamScratch

     

    With this, the mapping in the Plex Docker container is /transcode (container path) to /tmp/PlexRamScratch (host path).

     

    This forces Plex to reclaim space by deleting older transcoded bits that have already played rather than just waiting to delete them all when the entire transcode finishes.

     

    Thank you for this @Hoopster! Transcoding with Plex has been a nightmare for me. I've tried ten ways to Sunday to get transcoding to RAM to work. Then I tried using an unassigned device which was a pain in its own right. This has fixed it for me! Thanks again.
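
    For anyone else setting this up, the only container-side change is that one path mapping (container path /transcode, host path /tmp/PlexRamScratch). As a bare docker run it would look roughly like the sketch below; the image name is only a placeholder and all of Plex's other mappings are omitted:

    # illustrative only: map the tmpfs scratch folder to /transcode inside the container
    docker run -d --name=plex \
      -v /tmp/PlexRamScratch:/transcode \
      plexinc/pms-docker

    In the Unraid template this is just a path entry with container path /transcode and host path /tmp/PlexRamScratch, with Plex's transcoder temporary directory pointed at /transcode.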

  2. 1 hour ago, CowboyRedBeard said:

    You can't have redundant cache (pool) with XFS...

    https://wiki.unraid.net/UnRAID_6/Storage_Management#Switching_the_cache_to_pool_mode

     

    That's a problem... I suppose this is something I could try, but don't see lack of a pool as a viable option going forward.

    Yes, exactly. There is no redundancy with XFS as far as I can tell. I back up appdata to the share every night, so if the SSD dies I'll at least have a backup to some degree. I would still like to see the btrfs issue fixed at some point, though.
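
    The nightly copy doesn't have to be anything fancy; something along these lines on a schedule would do it (the backups share and paths here are just examples):

    # mirror appdata from the cache to an array share; --delete keeps the copy in sync with the source
    rsync -avh --delete /mnt/cache/appdata/ /mnt/user/backups/appdata/

    Stopping the containers first avoids copying databases mid-write.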

  3. 13 minutes ago, kernelpanic said:

     

    I converted my cache (had to break the pool) to XFS (which does not support pooled drives), and the issue seems to be resolved. I can't say 100%, but I've not seen any slowdowns since the conversion. The conversion was not exactly simple; the steps came from the thread linked earlier in this one.

     

    1) Stop the array, then disable the Docker and VM services.

    2) Change the cache pool to single mode (if you're on 6.8.3) and allow the balance to complete. Then stop the array, unassign one of the cache disks, and restart the array.

    3) The array will balance the cache again. While this runs, the text "a btrfs operation is in progress" appears at the bottom of the Main tab next to the stop array button; I could also tell it was done because writes stopped happening to the drive I had removed from the cache.

    4) When that text was gone, I formatted the spare cache disk through Unassigned Devices (you must have the Unassigned Devices Plus plugin installed).

    5) Use the console to rsync the data from the main cache to the spare cache drive, e.g.:

    rsync -avrth --progress /mnt/cache/ /mnt/disks/second_ssd/

    6) Once the copy is done, stop the array again and format the remaining cache drive as XFS. Note: you must set the number of cache slots to 1 for XFS to appear as a file system option.

    7) Once the format is complete, restart the array and copy the data back using the same rsync command with the paths flipped, e.g.:

    rsync -avrth --progress /mnt/disks/second_ssd/ /mnt/cache/

    8) Once the copy is done, you should be all set. Restart the Docker and VM services.

     

    If there is an easier way, I couldn't find it in the forums, but I'm glad to have this working now. I may have skipped a step above since I did this from memory, but that is the gist. I'm not pleased that I no longer have 1TB of cache from two pooled 500GB SSDs, but I'd rather it function properly than not at all.

    Actually, as I typed this I queued up some huge downloads. They all finished around the same time and the server took a shit. So it's not fixed, but it is definitely better than it was. FWIW, I have two SSDs; one is a Samsung and one is not. I think I'll move everything to the non-Samsung SSD tomorrow and try again.

     

    Edit: I take that back; that was my machine kernel panicking again. I still stand by the XFS conversion being a good call.

  4. 8 minutes ago, CowboyRedBeard said:

    No, it did not happen when copying to the Optane drive; however, that drive is formatted as XFS. Maybe that is part of the issue?

     

    I wonder if there's an easy way for me to convert my cache pool to XFS and then try?

     

    I converted my cache (had to break the pool) to XFS (which does not support pooled drives), and the issue seems to be resolved. I can't say 100%, but I've not seen any slowdowns since the conversion. The conversion was not exactly simple; the steps came from the thread linked earlier in this one.

     

    1) Stop the array, then disable the Docker and VM services.

    2) Change the cache pool to single mode (if you're on 6.8.3) and allow the balance to complete. Then stop the array, unassign one of the cache disks, and restart the array.

    3) The array will balance the cache again. While this runs, the text "a btrfs operation is in progress" appears at the bottom of the Main tab next to the stop array button; I could also tell it was done because writes stopped happening to the drive I had removed from the cache.

    4) When that text was gone, I formatted the spare cache disk through Unassigned Devices (you must have the Unassigned Devices Plus plugin installed).

    5) Use the console to rsync the data from the main cache to the spare cache drive, e.g.:

    rsync -avrth --progress /mnt/cache/ /mnt/disks/second_ssd/

    6) Once the copy is done, stop the array again and format the remaining cache drive as XFS. Note: you must set the number of cache slots to 1 for XFS to appear as a file system option. (See the dry-run check sketched at the end of this post before formatting.)

    7) Once the format is complete, restart the array and copy the data back using the same rsync command with the paths flipped, e.g.:

    rsync -avrth --progress /mnt/disks/second_ssd/ /mnt/cache/

    8) Once the copy is done, you should be all set. Restart the Docker and VM services.

     

    If there is an easier way, I couldn't find it in the forums, but I'm glad to have this working now. I may have skipped a step above since I did this from memory, but that is the gist. I'm not pleased that I no longer have 1TB of cache from two pooled 500GB SSDs, but I'd rather it function properly than not at all.
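
    One sanity check worth running before formatting the original cache drive in step 6 is an rsync dry run with checksums; anything it lists still differs between the two copies. A sketch, using the same example paths as above:

    # -n = dry run, -c = compare by checksum; an empty file list means the copies match
    rsync -avcn /mnt/cache/ /mnt/disks/second_ssd/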

  5. Hi all,

     

    I'm having a bit of an odd issue with my server. It had been fairly stable for about 14 months, but I've had a problem since Unraid 6.8: the server seems to grind to a halt at random. I typically notice it when new web resources stop loading on my other devices, since I'm running a PiHole container on the server. Upon trying to access the web UI, different things have happened.

     

    1st, 2nd, and 3rd time) Web UI loaded, normal CPU utilization, but the Docker containers were stopped and would not restart. Restarting the server fixed the issue.

     

    4th time) This was yesterday. Upon loading the web UI I could see that 14 of 32 cores were pegged at 100% utilization. I checked the Docker tab and all containers were at 0% CPU utilization except for one, which was at 3%.

    I tried:

    • Stopping all containers; they wouldn't stop. The status wheel kept spinning, then eventually went away, but the containers never stopped.
    • Stopping the array; it wouldn't stop either. The stop button greyed out for a bit but became active again after a short while.
    • Clicking restart in the header; the server said it was going down, but it never actually shut down.
    • After about 30 minutes of waiting, I held the power button to hard-restart the server.

    Upon coming back up, I was able to start the array and everything was working again.

     

    5th time) Last night the containers stopped for a scheduled backup and never started back up. When I woke to no internet this morning (the PiHole DNS container was down), I checked and the containers were all stopped. I was able to manually start all of the containers with no issues.

     

    I enabled writing the syslog to a share on the cache drive after the 3rd occurrence. I'll attach the syslog and diagnostic report; there appear to be a fair number of kernel panics in it.
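
    If it helps anyone digging through the attached log, something like this will pull the relevant lines out of the mirrored syslog (the share path here is just an example):

    # search the saved syslog for panics, OOM kills, and hung tasks
    grep -iE 'kernel panic|out of memory|oom-killer|blocked for more than' /mnt/user/syslog/syslog-10.0.1.21.log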

     

    Thanks for the help!

    syslog-10.0.1.21.log hoth-diagnostics-20200325-1106.zip

  6. 3 minutes ago, controlol said:

    I installed my new 1050 Ti today, but for some reason HW transcoding doesn't work. For Plex it does work :)

    I get this error: 

    The transcoder does work with the exact same config but with H.265 CPU encoding instead of NVENC.
    The settings for my tdarr container are attached to this post.

    tdarr settings.png

    Try changing the value for NVIDIA_DRIVER_CAPABILITIES to "all" without the quotes.
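
    In the container template that's just the two variables plus the nvidia runtime. Stripped down to a bare docker run, it would look roughly like this (the image name and GPU UUID are placeholders; the real UUID comes from nvidia-smi -L):

    # illustrative only: pass the GPU through and expose all driver capabilities to the container
    docker run -d --name=tdarr \
      --runtime=nvidia \
      -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx \
      -e NVIDIA_DRIVER_CAPABILITIES=all \
      haveagitgat/tdarr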

  7. 5 hours ago, nicksphone said:

    I have noticed that, with the NVENC plugin, CPU usage maxes out on the pinned cores, so I don't think it's using the 1050 Ti to encode, even though I added NVIDIA_DRIVER_CAPABILITIES and NVIDIA_VISIBLE_DEVICES to the Docker container.

    I am in this exact situation. Perhaps I'm missing a key piece here, which I believe is the distinction between tdarr and tdarr_aio. How would one go about using the tdarr_aio container? There is only one app listed for tdarr.
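
    In the meantime, an easy way to confirm whether the card is actually doing the work is to watch it from the Unraid console while a job runs; if no transcode process shows up and GPU utilization stays at 0%, the encode is falling back to the CPU:

    # refresh nvidia-smi every second; look for the transcode process and non-zero GPU utilization
    watch -n 1 nvidia-smi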

  8. Hello,

     

    I've recently run into an odd issue that seems to affect some, but not all, of my Docker containers.

     

    I noticed that when changing settings in some Docker containers, the settings don't apply. For instance, in Deluge, if I change the enabled plug-ins and check back a little while later, the change has been undone. The same thing was happening in binhex-rtorrentvpn. To test, I removed that container and installed the linuxserver.io rTorrent container. This container (having never been installed before) has the same issue: if I turn off randomized incoming ports or change the default downloads directory and apply the changes, they'll be back to the way they originally were once I restart the container. Both containers have PUID = 99 and PGID = 100. The appdata share is shared over SMB as private. Fix Common Problems comes back with nothing other than permission errors on a different share (and obviously doesn't check permissions on the appdata share).

     

    If I edit the config files directly from Windows, the changes persist, which leads me to believe the containers are not able to write to their /config directories.
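
    To test that theory, the ownership can be checked from the Unraid console. With PUID 99 and PGID 100 the config files should belong to nobody:users; if they don't, a chown along these lines should sort it out (the Deluge path is just one example):

    # the containers run as nobody:users (uid 99 / gid 100), so /config contents should be owned accordingly
    ls -ln /mnt/user/appdata/deluge
    # reset ownership if anything turns up owned by root or another user
    chown -R nobody:users /mnt/user/appdata/deluge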

     

    Sonarr, Radarr, UniFi, and Plex seem to have no issues saving settings, updating their respective databases, etc.

     

    Any help is greatly appreciated.

    hoth-diagnostics-20190620-2227.zip
