Phoenix Down

Members
  • Posts

    134
  • Joined

  • Last visited

Posts posted by Phoenix Down

  1. 57 minutes ago, john_smith said:

    i'm reading the log now and seeing something in red I don't recall seeing before:
     

    Dec  4 12:20:04 HTPC emhttpd: /mnt/disk1 mount error: Volume not encrypted

    Not sure; I did not run into this issue with any of my 8 disks. Maybe there's a hint in this thread:

     

     

  2. 18 hours ago, john_smith said:

    I could move everything to my new large drive and try that one first, then I would subsequently want to move the files back to the small old drive in order to try encrypting the large empty drive again. 

     

    Is there something that would change from doing that with regards to encrypting the large new disk?

    Just trying to eliminate variables and see whether it's an issue with your large disk or something else.

  3. 6 hours ago, john_smith said:

    here are the logs:


    Dec  1 17:33:19 HTPC kernel: mdcmd (52): nocheck pause
    Dec  1 17:33:21 HTPC emhttpd: creating volume: disk2 (xfs - encrypted)
    Dec  1 17:33:21 HTPC emhttpd: shcmd (137720): /sbin/wipefs -a /dev/sdb
    Dec  1 17:33:22 HTPC root: /dev/sdb: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
    Dec  1 17:33:22 HTPC root: /dev/sdb: 8 bytes were erased at offset 0x1230bffffe00 (gpt): 45 46 49 20 50 41 52 54
    Dec  1 17:33:22 HTPC root: /dev/sdb: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
    Dec  1 17:33:22 HTPC root: /dev/sdb: calling ioctl to re-read partition table: Success
    Dec  1 17:33:22 HTPC emhttpd: writing GPT on disk (sdb), with partition 1 byte offset 32KiB, erased: 0
    Dec  1 17:33:22 HTPC emhttpd: shcmd (137721): sgdisk -Z /dev/sdb
    Dec  1 17:33:23 HTPC root: Creating new GPT entries in memory.
    Dec  1 17:33:23 HTPC root: GPT data structures destroyed! You may now partition the disk using fdisk or
    Dec  1 17:33:23 HTPC root: other utilities.
    Dec  1 17:33:23 HTPC emhttpd: shcmd (137722): sgdisk -o -a 8 -n 1:32K:0 /dev/sdb
    Dec  1 17:33:24 HTPC root: Creating new GPT entries in memory.
    Dec  1 17:33:24 HTPC root: The operation has completed successfully.
    Dec  1 17:33:24 HTPC kernel: sdb: sdb1
    Dec  1 17:33:24 HTPC emhttpd: shcmd (137723): udevadm settle
    Dec  1 17:33:24 HTPC emhttpd: mounting /mnt/disk2
    Dec  1 17:33:24 HTPC emhttpd: shcmd (137724): mkdir -p /mnt/disk2
    Dec  1 17:33:24 HTPC emhttpd: /mnt/disk2 mount error: Volume not encrypted
    Dec  1 17:33:24 HTPC emhttpd: shcmd (137725): rmdir /mnt/disk2
    Dec  1 17:33:24 HTPC emhttpd: Starting services...
    Dec  1 17:33:24 HTPC emhttpd: shcmd (137729): /etc/rc.d/rc.samba restart
    Dec  1 17:33:24 HTPC wsdd2[23016]: 'Terminated' signal received.
    Dec  1 17:33:24 HTPC winbindd[23019]: [2023/12/01 19:33:24.464016,  0] ../../source3/winbindd/winbindd_dual.c:1950(winbindd_sig_term_handler)
    Dec  1 17:33:24 HTPC winbindd[23019]:   Got sig[15] terminate (is_parent=1)
    Dec  1 17:33:24 HTPC wsdd2[23016]: terminating.
    Dec  1 17:33:24 HTPC winbindd[23022]: [2023/12/01 19:33:24.464336,  0] ../../source3/winbindd/winbindd_dual.c:1950(winbindd_sig_term_handler)
    Dec  1 17:33:24 HTPC winbindd[23022]:   Got sig[15] terminate (is_parent=0)
    Dec  1 17:33:24 HTPC winbindd[24197]: [2023/12/01 19:33:24.466769,  0] ../../source3/winbindd/winbindd_dual.c:1950(winbindd_sig_term_handler)
    Dec  1 17:33:24 HTPC winbindd[24197]:   Got sig[15] terminate (is_parent=0)
    Dec  1 17:33:26 HTPC root: Starting Samba:  /usr/sbin/smbd -D
    Dec  1 17:33:26 HTPC smbd[25806]: [2023/12/01 19:33:26.673575,  0] ../../source3/smbd/server.c:1741(main)
    Dec  1 17:33:26 HTPC smbd[25806]:   smbd version 4.17.10 started.
    Dec  1 17:33:26 HTPC smbd[25806]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Dec  1 17:33:26 HTPC root:                  /usr/sbin/wsdd2 -d -4
    Dec  1 17:33:26 HTPC root:                  /usr/sbin/winbindd -D
    Dec  1 17:33:26 HTPC wsdd2[25823]: starting.
    Dec  1 17:33:26 HTPC winbindd[25824]: [2023/12/01 19:33:26.791907,  0] ../../source3/winbindd/winbindd.c:1440(main)
    Dec  1 17:33:26 HTPC winbindd[25824]:   winbindd version 4.17.10 started.
    Dec  1 17:33:26 HTPC winbindd[25824]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Dec  1 17:33:26 HTPC winbindd[25826]: [2023/12/01 19:33:26.796899,  0] ../../source3/winbindd/winbindd_cache.c:3117(initialize_winbindd_cache)
    Dec  1 17:33:26 HTPC winbindd[25826]:   initialize_winbindd_cache: clearing cache and re-creating with version number 2
    Dec  1 17:33:26 HTPC emhttpd: shcmd (137733): /etc/rc.d/rc.avahidaemon restart
    Dec  1 17:33:26 HTPC root: Stopping Avahi mDNS/DNS-SD Daemon: stopped
    Dec  1 17:33:26 HTPC avahi-daemon[23073]: Got SIGTERM, quitting.
    Dec  1 17:33:26 HTPC avahi-dnsconfd[23084]: read(): EOF

     

    Have you encrypted any other disks in your system? Were they successful?
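One quick thing you could check from the console (a suggestion of mine, not something from the logs above; /dev/sdb1 is the partition the log created): a LUKS-formatted partition begins with the ASCII magic "LUKS", and `cryptsetup isLuks /dev/sdb1` tests exactly that. A rough manual equivalent:

```shell
# Rough check for a LUKS header (sketch; /dev/sdb1 is the partition from the
# log above). A LUKS1/LUKS2 header begins with the 4 ASCII bytes "LUKS".
has_luks_header() {
  [ "$(head -c 4 "$1" 2>/dev/null)" = "LUKS" ]
}

has_luks_header /dev/sdb1 && echo "LUKS header present" || echo "no LUKS header"
```

If the format had actually succeeded, the header should be there; "no LUKS header" would confirm that the encrypted format never completed.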

  4. 4 hours ago, john_smith said:

     

    Thanks for the quick reply. I did start it back up after changing the disk type; that's when I was presented with the format option near the bottom of the page. From there:

    1. I check the checkbox and select OK on the popup
    2. I select Format, the page reloads
    3. "Started, formatting..." shows up for a few seconds
    4. The page reloads again,
    5. The disk shows up as unmounted and unencrypted, with the text to the right showing "Unmountable: Volume not encrypted", and it remains on the XFS file system

    Check the system logs (top right, left of the solid circle with a question mark). That sounds like the format was unsuccessful for some reason.

  5. 3 hours ago, john_smith said:

    Posting here since it seems to be the most recent post on the topic. I also tried following the Spaceinvader One video, and have tried a handful of other things as well since I'm having trouble (including referencing the simpler steps at https://docs.unraid.net/unraid-os/manual/security/data-encryption/).

     

    However, I'm still having trouble encrypting a new 20TB disk I'm trying to add. These are the steps I'm following:

    • Go to the Main tab.
    • Stop the array.
    • Select the drive.
    • In File system type change the file system to the encrypted type that you want.
    • Select Apply to commit the change.
    • Select Done to return to the Main tab.
    • The drive now shows as unmountable and the option to format unmountable drives is present.
    • I check the checkbox and select OK on the popup
    • I select Format, the page reloads
    • "Started, formatting..." shows up for a few seconds
    • The page reloads again; the disk remains unmounted and unencrypted, showing "Unmountable: Volume not encrypted", and stays on the XFS file system

    Am I missing something?

     

    See my reply above:

     

    Quote

    Once you've emptied a drive using Unbalance, you just have to stop the array, and then change the disk type of the disk you just emptied to "XFS Encrypted", then start the array back up. Lastly, format the disk and that disk is converted.

     

    Did you start the array back up after you changed the disk type to "XFS Encrypted"?

     

  6. 1 hour ago, pervin_1 said:

    I am on 12.4, updated the compatibility code with my own checksums, and it got aborted upon boot

     

    Error: RAM-Disk Mod found incompatible files: 45361157ef841f9a32a984b056da0564 /etc/rc.d/rc.docker 9f0269a6ca4cf551ef7125b85d7fd4e0 /usr/local/emhttp/plugins/dynamix/scripts/monitor

     

     

    Check to make sure you copied and pasted the correct checksums into the RAM disk script. Maybe you missed a digit at the end.

  7. 2 hours ago, pervin_1 said:

    Ran two md5sum checks on rc.docker and scripts monitor, the checksums are different. Which means I should change the compatibility code, correct? 

     

    Thank you!

    If your Unraid version is different from @Mainfrezzer's code example, then it's possible those files are also different. So yes, you should update the RAM Disk code with your own checksums. Note that doing this doesn't mean the RAM Disk code won't have any issues, just that it won't abort itself. It might or might not have issues; we won't know until someone tries it out whenever a new version of Unraid comes out.

  8. 16 hours ago, pervin_1 said:

    Wondering if anyone can help me with questions above? Thanks! 

     

    If you look at this line:

    echo -e "45361157ef841f9a32a984b056da0564 /etc/rc.d/rc.docker\n9f0269a6ca4cf551ef7125b85d7fd4e0 /usr/local/emhttp/plugins/dynamix/scripts/monitor" | md5sum --check --status && compatible=1

     

    There are two files in the format of "md5_checksum path_to_file":

    45361157ef841f9a32a984b056da0564 /etc/rc.d/rc.docker
    
    9f0269a6ca4cf551ef7125b85d7fd4e0 /usr/local/emhttp/plugins/dynamix/scripts/monitor

     

    You can use md5sum to generate new checksums on those two files, like:

    md5sum /etc/rc.d/rc.docker

     

    Sounds like @Mainfrezzer's code already includes the new checksums for 6.12.4. But you can double check yourself if you want.
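The regenerate-and-verify cycle above can be sketched in a few lines (the file paths are the two watched files from this thread; the helper names are mine, not part of the RAM-Disk script):

```shell
# Sketch of the check the RAM-Disk mod performs (helper names are made up;
# the paths are the two files watched in this thread).
watched="/etc/rc.d/rc.docker /usr/local/emhttp/plugins/dynamix/scripts/monitor"

regen_checksums() {    # print fresh "md5  path" lines for the given files
  md5sum "$@"
}

verify_checksums() {   # read "md5  path" lines on stdin; exit 0 iff all match
  md5sum --check --status
}

# On the server:
#   regen_checksums $watched                          # paste output into the script
#   regen_checksums $watched | verify_checksums && echo compatible
```

If either file changes after you paste the lines in, `verify_checksums` exits non-zero, which is exactly what makes the mod abort.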

  9. Just wanted to say thank you @mgutt! I had an earlier version of your RAM Disk installed. Wasn't even sure of the version, as it didn't have the version number in the comments. Must have been from a couple of years ago. In any case, after upgrading to 6.11 recently, I noticed that my Disk 1 would have frequent writes to it, which wakes it up along with my two parity disks. This is despite my having emptied all contents from Disk 1, with all of my Dockers running from the SSD cache pool. Also, new writes should have gone to the cache pool and not the array. I spent hours watching iotop and dstat, and I was about to pull my hair out when I noticed that Disk 1 would only wake up when certain Docker containers were running (specifically DiskSpeed and HomeAssistant). On a whim, I looked to see if there was a newer version of the RAM Disk available, and found this thread. I updated the RAM Disk code, and voilà! No more disks waking up! Still not sure why certain Dockers write directly to the array or why it's always Disk 1, but I'm glad the new code fixed the issue :)

  10. 6 minutes ago, JorgeB said:

    Yes, Unraid decrypts the devices with --allow-discards.

    Awesome, thanks Jorge! Can I assume that this option is enabled by default?

     

    When did this option get enabled in Unraid? I found posts from as recently as 2-3 years ago that still said TRIM is not supported on encrypted SSDs in Unraid.
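For reference, whether a mapped device was opened with discards can be checked from the console (a sketch of mine, not from this thread; the mapping name is a guess, list yours with `dmsetup ls`). `cryptsetup status <mapping>` prints a "flags:" line containing "discards" when the device was opened with --allow-discards:

```shell
# Check whether a dm-crypt mapping was opened with discards allowed.
status_has_discards() {   # read `cryptsetup status` output on stdin
  grep -q 'flags:.*discards'
}

discards_enabled() {
  cryptsetup status "$1" 2>/dev/null | status_has_discards
}

# Usage (mapping name is an assumption):
#   discards_enabled md1 && echo "TRIM passes through"
```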

  11. 1 hour ago, JonathanM said:

    Most likely, since the underlying mechanism hasn't changed much. It's the same process as the disk format conversion, and there are multiple different ways to accomplish it, but they all involve getting one drive empty, formatting it, copying data from the next drive to be formatted, etc.

     

    BTW, if you are planning to run encryption, make sure your backup strategy is complete and as foolproof as you can get it, because if you have an issue, recovering data from encrypted volumes is orders of magnitude harder, so it's best to just restore from backup if you have an issue. Parity is NOT a backup, it's realtime, so if something corrupts your data, or a drive fails and one of the other drives has an issue while rebuilding the failed drive, you are sunk without backups.

    Understood, thanks for the reminder 🙂

     

    Turns out it's a bit easier than the video. Once you've emptied a drive using Unbalance, you just have to stop the array, and then change the disk type of the disk you just emptied to "XFS Encrypted", then start the array back up. Lastly, format the disk and that disk is converted.

  12. On 11/26/2022 at 11:53 PM, Phoenix Down said:

    Hi @bonienl, is this the right channel to report a bug? If not, please point me in the right direction :) 

     

    I've been noticing an issue with Autofan in the last couple of months. It seems like whenever all of my HDDs are spun down and only my NVME cache drives are still active, Autofan gets confused, thinks there are no active drives, and shuts down all of my case fans. This causes my NVME drives to get pretty hot. After digging through the Autofan code, I discovered the issue in function_get_highest_hd_temp():

     

    function_get_highest_hd_temp() {
      HIGHEST_TEMP=0
      [[ $(version $version) -ge $(version "6.8.9") ]] && HDD=1 || HDD=
      for DISK in "${HD[@]}"; do
        # Get disk state using sdspin (new) or hdparm (legacy)
        
    ########## PROBLEM HERE ##########
    # [[ -n $HDD ]] && SLEEPING=`sdspin ${DISK}; echo $?` || SLEEPING=`hdparm -C ${DISK}|grep -c standby`
        
        ########## Fix is below ##########
        [[ -n $HDD ]] && SLEEPING=`hdparm -C ${DISK} |& grep -c standby`
        ##################################
        
        echo Disk: $DISK - Sleep: $SLEEPING
        if [[ $SLEEPING -eq 0 ]]; then
          if [[ $DISK == /dev/nvme[0-9] ]]; then
            CURRENT_TEMP=$(smartctl -n standby -A $DISK | awk '$1=="Temperature:" {print $2;exit}')
          else
            CURRENT_TEMP=$(smartctl -n standby -A $DISK | awk '$1==190||$1==194 {print $10;exit} $1=="Current"&&$3=="Temperature:" {print $4;exit}')
          fi
          if [[ $HIGHEST_TEMP -le $CURRENT_TEMP ]]; then
            HIGHEST_TEMP=$CURRENT_TEMP
          fi
        fi
      done
    
      echo Highest Temp: $HIGHEST_TEMP
    }

     

    Check out the line I marked ########## PROBLEM HERE ##########. Specifically, the middle condition (sdspin).

     

    [[ -n $HDD ]] && SLEEPING=`sdspin ${DISK}; echo $?` || SLEEPING=`hdparm -C ${DISK}|grep -c standby`

     

    "sdspin" is a shell script that runs hdparm -C on the NVME device. Here's the contents of sdspin:

     

    # cat /usr/local/sbin/sdspin 
    #!/bin/bash
    
    # spin device up or down or get spinning status
    # $1 device name
    # $2 up or down or status
    # ATA only
    
    # hattip to Community Dev @doron
    
    RDEVNAME=/dev/${1#'/dev/'}      # So that we can be called with either "sdX" or "/dev/sdX"
    
    hdparm () {
      OUTPUT=$(/usr/sbin/hdparm $1 $RDEVNAME 2>&1)
      RET=$?
      [[ $RET == 0 && ${OUTPUT,,} =~ "bad/missing sense" ]] && RET=1
    }
    
    if [[ "$2" == "up" ]]; then
      hdparm "-S0"
    elif [[ "$2" == "down" ]]; then
      hdparm "-y"
    else
      hdparm "-C"
      [[ $RET == 0 && ${OUTPUT,,} =~ "standby" ]] && RET=2
    fi

     

    If I run the command directly:

     

    # hdparm -C /dev/nvme0
    
    /dev/nvme0:
     drive state is:  unknown
    
    # echo $?
    25

     

    This is the same exit code that sdspin returns:

     

    # sdspin /dev/nvme0
    
    # echo $?
    25

     

    My cache drives consist of 2x Silicon Power P34A80 1TB M.2 NVME drives. Apparently hdparm cannot get their power state, and because sdspin is looking for the word "standby", it never finds it. More importantly, the middle (sdspin) condition always sets $SLEEPING to sdspin's exit code, which is 25 in this case. And because 25 is not zero, this causes the script to think all disks are in standby mode (even though my NVME drives are still active), thus causing Autofan to shut off all case fans.

     

    My fix is simple: remove the middle condition:

     

    [[ -n $HDD ]] && SLEEPING=`hdparm -C ${DISK} |& grep -c standby`

     

    Because the last condition is looking specifically for the word "standby" rather than just taking the exit code, it works: hdparm reports my NVME drive's state as "unknown", which is not "standby", so the script correctly considers the NVME drive as NOT in standby.

     

    I've locally modified the Autofan script and it's been running correctly for a few weeks. Unfortunately, my local changes get wiped out every time I reboot the server, so I'd appreciate it if you or the author could update the script to fix this bug.

     

    Thanks in advance!

    Is there anyone actively maintaining the Autofan plug-in? I've already presented the fix. Just need the maintainer to merge in the one-line code change.

  13. Hi @bonienl, is this the right channel to report a bug? If not, please point me in the right direction :) 

     

    I've been noticing an issue with Autofan in the last couple of months. It seems like whenever all of my HDDs are spun down and only my NVME cache drives are still active, Autofan gets confused, thinks there are no active drives, and shuts down all of my case fans. This causes my NVME drives to get pretty hot. After digging through the Autofan code, I discovered the issue in function_get_highest_hd_temp():

     

    function_get_highest_hd_temp() {
      HIGHEST_TEMP=0
      [[ $(version $version) -ge $(version "6.8.9") ]] && HDD=1 || HDD=
      for DISK in "${HD[@]}"; do
        # Get disk state using sdspin (new) or hdparm (legacy)
        
    ########## PROBLEM HERE ##########
    # [[ -n $HDD ]] && SLEEPING=`sdspin ${DISK}; echo $?` || SLEEPING=`hdparm -C ${DISK}|grep -c standby`
        
        ########## Fix is below ##########
        [[ -n $HDD ]] && SLEEPING=`hdparm -C ${DISK} |& grep -c standby`
        ##################################
        
        echo Disk: $DISK - Sleep: $SLEEPING
        if [[ $SLEEPING -eq 0 ]]; then
          if [[ $DISK == /dev/nvme[0-9] ]]; then
            CURRENT_TEMP=$(smartctl -n standby -A $DISK | awk '$1=="Temperature:" {print $2;exit}')
          else
            CURRENT_TEMP=$(smartctl -n standby -A $DISK | awk '$1==190||$1==194 {print $10;exit} $1=="Current"&&$3=="Temperature:" {print $4;exit}')
          fi
          if [[ $HIGHEST_TEMP -le $CURRENT_TEMP ]]; then
            HIGHEST_TEMP=$CURRENT_TEMP
          fi
        fi
      done
    
      echo Highest Temp: $HIGHEST_TEMP
    }

     

    Check out the line I marked ########## PROBLEM HERE ##########. Specifically, the middle condition (sdspin).

     

    [[ -n $HDD ]] && SLEEPING=`sdspin ${DISK}; echo $?` || SLEEPING=`hdparm -C ${DISK}|grep -c standby`

     

    "sdspin" is a shell script that runs hdparm -C on the NVME device. Here's the contents of sdspin:

     

    # cat /usr/local/sbin/sdspin 
    #!/bin/bash
    
    # spin device up or down or get spinning status
    # $1 device name
    # $2 up or down or status
    # ATA only
    
    # hattip to Community Dev @doron
    
    RDEVNAME=/dev/${1#'/dev/'}      # So that we can be called with either "sdX" or "/dev/sdX"
    
    hdparm () {
      OUTPUT=$(/usr/sbin/hdparm $1 $RDEVNAME 2>&1)
      RET=$?
      [[ $RET == 0 && ${OUTPUT,,} =~ "bad/missing sense" ]] && RET=1
    }
    
    if [[ "$2" == "up" ]]; then
      hdparm "-S0"
    elif [[ "$2" == "down" ]]; then
      hdparm "-y"
    else
      hdparm "-C"
      [[ $RET == 0 && ${OUTPUT,,} =~ "standby" ]] && RET=2
    fi

     

    If I run the command directly:

     

    # hdparm -C /dev/nvme0
    
    /dev/nvme0:
     drive state is:  unknown
    
    # echo $?
    25

     

    This is the same exit code that sdspin returns:

     

    # sdspin /dev/nvme0
    
    # echo $?
    25

     

    My cache drives consist of 2x Silicon Power P34A80 1TB M.2 NVME drives. Apparently hdparm cannot get their power state, and because sdspin is looking for the word "standby", it never finds it. More importantly, the middle (sdspin) condition always sets $SLEEPING to sdspin's exit code, which is 25 in this case. And because 25 is not zero, this causes the script to think all disks are in standby mode (even though my NVME drives are still active), thus causing Autofan to shut off all case fans.

     

    My fix is simple: remove the middle condition:

     

    [[ -n $HDD ]] && SLEEPING=`hdparm -C ${DISK} |& grep -c standby`

     

    Because the last condition is looking specifically for the word "standby" rather than just taking the exit code, it works: hdparm reports my NVME drive's state as "unknown", which is not "standby", so the script correctly considers the NVME drive as NOT in standby.
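The difference between the two conditions can be reproduced without the hardware by replaying the hdparm output captured earlier (just an illustration; the output text is copied from the session above):

```shell
# Replay the hdparm output captured above to show why the fix works:
# grep -c counts lines containing "standby" (0 for this NVMe drive), whereas
# the sdspin path handed back the raw exit code 25, which looked like "asleep".
hdparm_nvme_output=$'/dev/nvme0:\n drive state is:  unknown'

SLEEPING_OLD=25   # what the buggy sdspin path produced on this drive
SLEEPING_NEW=$(printf '%s\n' "$hdparm_nvme_output" | grep -c standby)

echo "old=$SLEEPING_OLD new=$SLEEPING_NEW"   # new=0, so the drive counts as awake
```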

     

    I've locally modified the Autofan script and it's been running correctly for a few weeks. Unfortunately, my local changes get wiped out every time I reboot the server, so I'd appreciate it if you or the author could update the script to fix this bug.

     

    Thanks in advance!

    • Thanks 1
    • Upvote 1
  14. On 10/15/2021 at 1:21 PM, spants said:

    I have created a new docker template for Octoprint using the official docker images and supporting webcam streaming.

     

    • You need video drivers installed on unRaid for the kernel to see a camera.
      • (on Version: 6.10.0-rc1) - install DVB Drivers in Community Apps and select LibreELEC
      • This will need a reboot after installation
         
    • plug in your camera, you should see /dev/video0 appear in a terminal session on unRaid
       
    • Install OctoPrint-Spants
      • add the following:
        • variables: 
          • ENABLE_MJPG_STREAMER = true
          • CAMERA_DEV = /dev/video0
          • MJPG_STREAMER_INPUT = -y -n -r 640x480 (can change to suit)

        • port:

          • webcam: container port 80, host port 5003
          • snapshot: container port 8080, host port 5004

             

    • In octoprint's webcam settings

      • set the stream url to     http://IPADDRESS:5003/webcam/?action=stream and test - it should work

      • set the snapshot url to       http://IPADDRESS:5004/?action=snapshot and test

      • IPADDRESS is your unraid server address if you used bridge networking

     

    [Screenshot: OctoPrint webcam settings]

     

    Thanks for the new docker! I'm still on the old nunofgs docker and want to migrate to your new version. However, I have a ton of plugins and customizations. What's the best way to bring them over to the new docker? Should I just copy everything over from /mnt/user/appdata/octoprint?
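In case it helps, the copy itself is straightforward (a sketch; the destination folder name "octoprint-spants" is a guess, use whatever appdata path the new template creates, and stop both containers first):

```shell
# Migrate OctoPrint appdata into a new container's folder (sketch; the
# destination path is hypothetical, so match it to the new template, and
# stop both containers before copying).
migrate_appdata() {
  cp -a "$1/." "$2/"    # preserve permissions, ownership, and timestamps
}

# migrate_appdata /mnt/user/appdata/octoprint /mnt/user/appdata/octoprint-spants
```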

  15. 6 minutes ago, Sloppy said:

    Thanks for the info guys, yes i blew it up already trying to update lol. had to re-install. the only problem with the other docker that is frequently updated is it doesnt support the easy webcam setup. @Phoenix Down if you migrate over and get the webcam working please let me know how. 

    For sure, when I do the migration, I'll post an update in this thread.

    • Like 1
  16. 3 hours ago, Sloppy said:

    @Tergi Hello just wondering what version of octoprint you are using? https://hub.docker.com/r/nunofgs/octoprint/ hasnt been updated in a while. is there a way to update octoprint? Thanks

    That's what I'm using as well. I noticed the same thing - it hasn't been updated for a long time. There is another Octoprint docker in Community Apps that seems to be frequently updated. I plan to migrate over, but haven't had the time to do it.

     

    10 minutes ago, Tergi said:

    Hi @sloppy, I am using his default install. I am not doing as much 3D printing as I used to. OctoPrint has an updater in it... when I tried it once, it blew up everything. If you try it, you will probably want to back up your configs.

    You can update all of the plugins EXCEPT for Octoprint itself. That will blow everything up, as you saw.

  17. 26 minutes ago, gigo90 said:

    First of all, thanks for the information.....now the further question: is there a web gui for the plugin. In the first pages of this thread, the author spoke about a gui but i couldn't find specific information...

    The plugin has no GUI. Command line only.

  18. 13 hours ago, gigo90 said:

    Hi all! I got the official rclone docker from Docker Hub (rclone/rclone). The installation went fine; I have the rclone docker in the Unraid docker list. Unfortunately, it doesn't want to start. If I try to start it manually, it stops immediately. I would like to run the docker instead of the plugin, in order to use the web GUI, because I'm not familiar with the rclone plugin and the command line. Thanks

    I recommend you use the plugin instead. I've given up on trying to get the Docker to work properly. It's also no longer being maintained while the plugin is.

  19. 1 hour ago, aurevo said:

     

    A quick question in between, do you own much x265 material?

    If so, Jellyfin has problems playing these files directly and transcodes them anyway.

     

    Do you have the possibility to try Jellyfin on a Fire TV Stick or Apple TV? Personally I use the Fire TV Stick and have no problems, but I find the Android app very buggy and not reliable. In the browser I also have to struggle with bad performance from time to time and therefore I personally use the Fire TV Stick, as I did with Plex before.

    My entire library is in HEVC. I use Infuse on Apple TV and have no issues playing them. I also tried the Jellyfin app on the iPhone, which also has no problem playing them. But for some reason, the Jellyfin app activates transcoding even though my phone is capable of decoding HEVC in hardware. Infuse, on the other hand, will always direct stream the video if the client has the hardware to decode the codec (which Apple TV does).
