Coke84

Members
  • Posts: 31
  • Joined
  • Last visited
Everything posted by Coke84

  1. Thanks @trurl, but do I have to mention explicitly that I cannot pull the diagnostics via the GUI as described in the link, since I no longer have access to the GUI?
  2. Hi, after having some issues with Nextcloud and MariaDB, I rebooted my server for the first time in months. Now I have a lot of issues. After boot-up I could no longer access the local IP (192.168.178.100) directly: I always got redirected to the hash.unraid.net page with an SSL error, regardless of whether I used https://192.168.178.100 or http://.... That led me to disable SSL in ident.cfg under /boot/config (USE_SSL="no"). After a reboot, the redirect to hash.unraid.net is gone. I still cannot access via https:// (makes sense), but with http://... I at least get the login page. And that's it: I can enter my credentials, but nothing happens; it just reloads the login page again and again. SSL, the web UIs of my Docker applications, etc. all work fine, except for the login to the Unraid web UI itself. Any ideas? Thanks
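For anyone hitting the same redirect: the USE_SSL change mentioned above can be scripted. This is only a sketch of the idea, shown against a throwaway copy of the file rather than the real /boot/config/ident.cfg, so it can be tried safely first:

```shell
# Sketch of the USE_SSL change described above. On a real server the file
# is /boot/config/ident.cfg; a throwaway copy is used here.
cfg=/tmp/ident.cfg
printf 'USE_SSL="yes"\n' > "$cfg"              # stand-in for the real file
sed -i 's/^USE_SSL=.*/USE_SSL="no"/' "$cfg"    # the actual edit
grep '^USE_SSL' "$cfg"                         # prints: USE_SSL="no"
```

After editing the real file on the flash drive, a reboot is needed for the change to take effect.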
  3. Is there any ETA for the new RC including this feature? I cannot install the latest updates on Win11 due to the TPM restrictions, and I want to avoid setting up a virtual solution if passthrough of the onboard TPM hardware will be available shortly.
  4. This is a matter of configuration and setup. icloudpd works very well and downloads very reliably.
  5. Hey, since yesterday I've had the issue that icloudpd no longer converts HEIC to JPEG for me. It downloads all files properly, but when it comes to converting them to JPEG it reports that heif-convert does not exist. Did something change in the sync-icloud.sh script in the last update? Or does anybody have an idea how to fix it? I've already tried to set up the Docker container from scratch, but the issue remained: the first attempt downloads and converts the files, after that the issue comes up again and conversion fails. Thank you so much!
Update: I was able to fix it. I had not updated the new shell scripts in my docker config folder, which were changed recently. As a consequence, the health check as well as the HEIC conversion were faulty.
Update 2: see above; it worked on the first try, but on the second attempt the error came up again.
Update 3: the issue has been fixed and an updated Docker template has been published. Thanks for your effort!
2021-09-12 18:51:58 INFO Converting /home/user/iCloud/2021/08/IMG_6511.HEIC to /home/user/iCloud/2021/08/IMG_6511.JPG
2021-09-12 18:51:58 INFO Timestamp of HEIC file: Sat Aug 7 09:44:35 2021
2021-09-12 18:51:58 INFO Setting timestamp of /home/user/iCloud/2021/08/IMG_6511.JPG to Sat Aug 7 09:44:35 2021
2021-09-12 18:51:58 INFO Converting /home/user/iCloud/2021/08/IMG_6510.HEIC to /home/user/iCloud/2021/08/IMG_6510.JPG
/usr/local/bin/sync-icloud.sh: line 504: heif-convert: not found
2021-09-12 18:51:58 INFO Timestamp of HEIC file: Sat Aug 7 09:44:32 2021
2021-09-12 18:51:58 INFO Setting timestamp of /home/user/iCloud/2021/08/IMG_6510.JPG to Sat Aug 7 09:44:32 2021
2021-09-12 18:51:58 INFO Converting /home/user/iCloud/2021/08/IMG_6509.HEIC to /home/user/iCloud/2021/08/IMG_6509.JPG
/usr/local/bin/sync-icloud.sh: line 504: heif-convert: not found
2021-09-12 18:51:58 INFO Timestamp of HEIC file: Sat Aug 7 09:43:52 2021
2021-09-12 18:51:58 INFO Setting timestamp of /home/user/iCloud/2021/08/IMG_6509.JPG to Sat Aug 7 09:43:52 2021
2021-09-12 18:51:58 INFO Converting /home/user/iCloud/2021/08/IMG_6507.HEIC to /home/user/iCloud/2021/08/IMG_6507.JPG
/usr/local/bin/sync-icloud.sh: line 504: heif-convert: not found
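The "not found" lines suggest the converter binary simply isn't on the container's PATH. A quick way to confirm that from the container console (a generic sketch, not part of the official script):

```shell
# Check whether the HEIC converter the sync script calls is available.
if command -v heif-convert >/dev/null 2>&1; then
  echo "heif-convert found: $(command -v heif-convert)"
else
  echo "heif-convert missing - HEIC-to-JPG conversion will fail"
fi
```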
  6. I had a similar problem. The main issue, however, is that you cannot assign eth1 to specific Docker apps only; you can only assign a particular interface to all Docker containers at once. I solved it by setting up an Ubuntu VM that communicates with the outside world exclusively via VPN. There is a nice guide on this by SpaceInvaderOne. In that VM I then also run the few programs that should communicate via VPN, installed natively instead of as Docker apps. The VM likewise uses eth1 exclusively.
  7. Do you perhaps have guidance on how to set this up, or a thread on the pros and cons? I'm really making a hash of it and can't find anything about it on the board.
  8. Sorry for crashing the thread, but are you sure? I'd have to measure it on my system, but even the annoying (and thankfully small) LED on my 1650 Super goes off as soon as the VM is off. So I assume the GPU in my setup only draws power when it is actually needed by the VM. And that is exactly why I wanted a CPU with an integrated GPU: the iGPU, as the primary GPU, handles Unraid's tasks, and the PCIe GPU remains exclusively available for the VMs.
  9. I use icloudpd to automatically download all my iCloud photos and videos on a daily basis. icloudpd stores all the files in my share /mnt/user/icloud. I've set up PhotoPrism to access this share in read-only mode only, which means PhotoPrism will not move, copy, adjust or delete any of the files stored in that share. I only want PhotoPrism to make the files in that share "visible", not to act as a file browser or anything else. Is there a way to automate the indexing? Currently PhotoPrism works quite well and shows all the photos in that share, but it does not update automatically on a regular basis. In the end I have to trigger the indexing manually to include the new files downloaded by icloudpd during the week. Do you guys have any idea, e.g. a cron job / user script, to trigger the indexing?
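One way to automate this is a scheduled docker exec that calls PhotoPrism's own index command. A sketch for the User Scripts plugin or a cron entry; the container name "photoprism" is an assumption, so adjust it to your template:

```shell
# Cron sketch: re-index the PhotoPrism library every night at 01:00.
# "photoprism" is the assumed container name; the inner "photoprism index"
# is PhotoPrism's CLI indexing command.
0 1 * * * docker exec photoprism photoprism index >/dev/null 2>&1
```

Run as a user script on a weekly/daily schedule, the command line (without the cron prefix) does the same job.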
  10. You don't have to go that far. In Unraid -> Tools -> Update OS -> switch the branch from stable to next. After a short while the beta should announce itself and prompt you to update. Just be aware that it is still a beta, even though so far I've only found minor glitches that were easy to work around.
  11. I set up icloudpd this evening. It was a bit of trial and error, but for now I think it is working; it's currently downloading my more than 20,000 photos and videos. Although I used the parameter to convert HEIC to JPG, it downloads all the HEIC files first - I guess they are converted to JPG afterwards and not on the fly. At the very first start, I opened the icloudpd console via the Unraid Docker web UI and started sync-icloud.sh manually. Since I had not realized that there was no .mounted file in my mounted folder, I tried several things, including executing the icloud and icloudpd binaries in /usr/bin/, which handle generating the keychain and the 2FA cookie. That was all by accident, but the cookie got generated. Anyway, I executed sync-icloud.sh again and again because I still had not realized that the .mounted file was missing. It took another 30 minutes to realize that and touch the file. Regardless of the parameters, the tool seems to analyze all pictures and videos uploaded to iCloud; in my case this took about 30 minutes, and then the download started. I also added a few parameters. Looks good so far; let's see whether the HEIC-to-JPG conversion is executed after the download has finished (in approx. 10 hours). Attached are my Docker image settings. Of course there are also a username and password; I've deleted those entries just for the screenshots.
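The .mounted marker that cost me 30 minutes can be created with a single touch. A minimal sketch; the demo path below is a stand-in for your actual mapped iCloud folder:

```shell
# The sync script waits for a ".mounted" marker inside the download folder
# before it starts syncing. Demo path stands in for e.g. /mnt/user/icloud.
dest=/tmp/icloud-demo
mkdir -p "$dest"
touch "$dest/.mounted"
ls -A "$dest"          # prints: .mounted
```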
  12. In the XML view, simply delete the few lines from <graphics> ... through </video> and the last code snippets of VNC are gone. Your graphics card will then not only be shown again but also be selected as the primary graphics device - at least until you add VNC back to the profile.
  13. Yes, that works without problems, but only from v6.9 onwards, which is currently offered as a beta.
  14. Thanks for this hint. With this adjustment in the XML I was able to launch Genshin Impact. Unfortunately this setting does not let me launch Forza Horizon 4, Need for Speed Heat or FIFA 21, as all of them crash due to supposedly insufficient hardware specs. I went back to the old XML.
  15. Yes, you can keep using VNC for remote access, but VNC does not make use of hardware acceleration by your GPU.
  16. Basically it is: AMD GPUs = compatible with every macOS; Nvidia GPUs = compatible up to High Sierra using the web driver. From my experience, I was never able to get Nvidia GPUs running stably on macOS, regardless of macOS version or GPU.
  17.
1. Not intentionally, but in the end - yes.
2. At least one HDD and the USB stick have to stay with Unraid directly - all other components can be passed through to the VM. In the end I would not pass through your drives but rather have one big virtual VM disk plus several shares the VM can access. That makes the setup far less complicated, and with a parity drive you get a kind of backup.
3. Basically you can, but it is recommended to keep core 0 and 4 GB of RAM reserved for Unraid. That said, even when passing through all cores I never experienced any issues.
4. The server / Unraid itself can be accessed from any browser. Your VM can be remotely accessed via VNC - which is no fun at all - or via any other remote-access solution, e.g. Splashtop, TeamViewer, AnyLink, Parsec, etc. But to make the best use of your VM, attach a monitor / TV and peripherals directly to your server and enjoy the zero-lag performance.
5. You can make macOS think the VM is any Mac; it's just a setting in the Clover configuration.
6. That should be handled by Unraid itself, but yes, a "RAID 0"-like big share spanning both SSDs is possible.
7. In Unraid, a share has to be created and assigned to your drive. During share creation you can mark it as a Time Machine share, and then of course you can use it as a regular target for Time Machine backups. That's how I do it for my new macOS VM on Unraid as well as for my MBP. Additionally, you can set up a periodic backup of the VM itself in Unraid (the full image file including config files) - depending, of course, on the vdisk size and the free space on your Unraid drives.
  18. Finally I also got my Mac VM working with macinabox. The key was not to adjust anything in the XML using the GUI form view, even though I used to consider only the few custom lines at the bottom - that crashed my VM every time. I figured out that as soon as you adjust anything via the GUI, not only is the custom code gone, it basically changes everything (model types, bus assignments, etc.). So I decided to keep the preconfigured XML as it is and to add my lines for passing through the dedicated graphics card and keyboard, as well as to change the assigned RAM and CPU cores, directly in the XML view. My steps (I ended up with High Sierra as it has the best Nvidia compatibility):
1. Set up the VM using macinabox (awesome Docker, mate @SpaceInvaderOne!!).
2. Ran through the installation process of High Sierra using VNC.
3. In macOS, started Clover Configurator:
3.1 Mounted the EFI partition and loaded the plist.
3.2 SMBIOS -> changed to iMac18,1 and refreshed the UUID and serial + ensured that the UUID is not in use by someone else.
3.3 System Parameters -> activated the Nvidia Web Driver.
4. Installed TeamViewer and activated unattended remote access.
5. Turned off the VM.
6. Edited the VM -> switched to XML view:
6.1 Increased the memory in both lines to 8388608 (8 GB RAM).
6.2 Added two more cores (the number-of-cores entries in lines 14 and 31 have to be increased to 4).
6.3 Adjusted the machine type to the latest pc-q35-5.0 (I'm already on 6.9b25).
6.4 Changed the model type of my network bridge to e1000-82545em, as the type in the standard profile did not allow logging in to Apple services (e.g. iCloud).
6.5 Copied the four entries starting with <hostdev [...] and ending with </hostdev> from my Win10 VM: the USB devices I want to pass through as well as my Nvidia GT1030 (video + sound).
6.6 Inserted those entries before the </devices> line almost at the bottom of the XML.
6.7 Deleted the <graphics ...> and <video ...> entries, as they are only necessary for VNC.
6.8 Saved the XML and booted the VM.
7. As the picture on my TV (attached to the GT1030) looked as distorted as expected, I switched to TeamViewer.
8. Opened the Terminal and pulled and installed the Nvidia driver using the command posted by @SpaceInvaderOne: bash <(curl -s https://raw.githubusercontent.com/Benjamin-Dobell/nvidia-update/master/nvidia-update.sh)
9. Rebooted, and done.
In the end it was a lot of trial and error, and especially editing the XML via the Unraid GUI, which crashed the VM completely every time, cost a lot of time and effort. But as soon as I realized I had to leave the GUI alone and just edit the XML directly, it went smoothly.
What I forgot to mention: after step 5 I made a copy of the XML file and created a new VM from it (with adjusted VM name and UUID) so that I have two VMs sharing the same disk: one low-end VM for VNC (2 cores + 4 GB RAM) and one more powerful VM for my TV (4 cores + 8 GB RAM + dedicated graphics).
Fun fact: my 400 EUR "NAS" virtualizing macOS delivers more performance in macOS at 4K than my 2018 1600 EUR MacBook Pro does at 2K Retina.
Coming to the bad news: I have not yet managed to get audio working (although it is passed through with my Nvidia graphics), nor Bluetooth (onboard Intel Bluetooth chip, also passed through, the same way as in my Win10 VM, where it works perfectly). Do you guys have any idea? Thanks!
  19. Hi, I am using my dedicated GT1030 card for my Windows 10 VM. This card has one HDMI connection. Now I want to add another screen, which would require a second HDMI connection; my board has two additional connections for the iGPU. The integrated GPU is used by the Emby server. When I use the board HDMI, all I can see is the Unraid console itself. Is there a way to use one of the two onboard HDMI connections to output the video of the Win10 VM? PS: I also cannot pass the iGPU through to Win10, as it is needed by my Dockers. The other way round, as the GT1030 does not support NVENC, I cannot use it for my Emby server. Thanks!
  20. Hey there, my Ubuntu 20.04 VM updated a few drivers, including Bluetooth (built-in) as well as the Logitech Unifying receiver; both are passed through directly as USB devices. After the update, my keyboard was no longer usable and the only option was to hard-reset the VM. The consequence is that neither my Logitech Unifying receiver nor my Bluetooth chip can be seen anywhere. I cannot choose them in the VM setup anymore, and no VM is able to start because it says that USB devices x and y are missing. Even in the system information they are gone. Did the f****ng Ubuntu update break my devices? As a side note, my unRAID system used to take approx. 90-100 seconds to start (power-on to web-frontend login); since the devices are gone, it takes about 5 minutes.
  21. Hey, my Win10 VM works pretty well but is always short on free storage. Since I had one SSD left, I decided to dedicate it to my Win10 VM by passing the drive through. Using Unassigned Devices, the SSD has passthrough enabled but nothing else. In my Win10 VM template, I added the SSD as a second vdisk using the manual path to the drive itself (/dev/disk/by-id/ata-Samsung_SSD_850_[...]) with bus SATA. Every time I boot my VM, I am initially unable to access the SSD, which is annoying as I currently use this SSD for my Steam library, the Win10 Downloads folder, etc. It's not that Win10 has no access to it at all: when I open the partition manager, I can see the SSD and am able to mount it and assign a drive letter to make it usable under Windows. But without first opening the partition manager and manually "mounting" the drive, it is not accessible. Did I miss any configuration? Thank you for your support!
This is what the log says:
-blockdev '{"driver":"host_device","filename":"/dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB_S2CKNXAG916413Z","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
as well as the template:
<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='2'>
  <name>Windows 10</name>
  <uuid>98e50835-0873-309a-3d35-8aba9276156e</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-5.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/98e50835-0873-309a-3d35-8aba9276156e_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' dies='1' cores='2' threads='1'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10/vdisk1.img' index='2'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB_S2CKNXAG916413Z' index='1'/>
      <backingStore/>
      <target dev='hdd' bus='sata'/>
      <alias name='sata0-0-3'/>
      <address type='drive' controller='0' bus='0' target='0' unit='3'/>
    </disk>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0xe'/>
      <alias name='pci.7'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
    </controller>
    <controller type='pci' index='8' model='pcie-to-pci-bridge'>
      <model name='pcie-pci-bridge'/>
      <alias name='pci.8'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:a9:8a:04'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio-net'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x01' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-2-Windows 10/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <rom file='/mnt/user/domains/vBIOS/GP108-mod.rom'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc52b'/>
        <address bus='1' device='2'/>
      </source>
      <alias name='hostdev2'/>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x8087'/>
        <product id='0x0aa7'/>
        <address bus='1' device='3'/>
      </source>
      <alias name='hostdev3'/>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>
  22. Yes, even with the cheap ArcticFreeze, the CPU stays approx. 10 degrees cooler (according to tests I've read; never measured myself) and is quieter than the stock cooler. For RAM, it depends on the usage of your system. I use 8 GB RAM for my VM(s) and the other 8 GB for unRAID itself (including Docker containers; unRAID + Dockers utilize approx. 5 GB RAM in my setup). When using unRAID only as a typical NAS, 8 GB is fair, but once your system has the power to do more, I'm pretty sure you will do more. There's always room for more RAM. Btw, as you mentioned the 6 SATA ports of the H370M-ITX/ac board: when you use the M.2 slot, the first SATA port is disabled as it shares the same bus. So you can only use six internal drives in total, regardless of whether that is 1x M.2 and 5x SATA or no M.2 and 6x SATA. However, this board is the only mITX board that comes with 6 drive ports at a reasonable price - that's why I chose it too.
  23. Similar to my specs, and it just works great. I would go with a WD Red (M.2) SSD for the cache, as it is built for 24/7 operation and comes with a 5-year warranty - same speed (SATA3, 6 Gb/s) for 10-15 EUR more. I also have the 1 TB cache, but I've never used more than 300 GB so far ("daily use" shares are set to "prefer" on my cache; the archive share has cache usage "yes" with the mover scheduled daily at midnight). So I'd rather invest the bucks in an additional 8 GB RAM stick and go with a 500 GB cache. For cooling I use an ArcticFreeze 11 LP top-blower (15 EUR). It is small, runs at 1100-1300 rpm, and my CPU has not yet seen a temperature above 50 degrees, even when using my Win10 VM and playing some games. The system overall is absolutely silent; you can only hear the case fans within 30 cm of the Node 304 case. The ArcticFreeze is quite wide, but I am using the same ASRock H370M-ITX/ac with two RAM sticks and there is still 1 mm between the sticks and the cooler. @SirReal63 playing a video with Plex / Emby / etc. that requires transcoding always pegs a CPU at its maximum. Enabling hardware decoding via the iGPU or a dedicated GPU reduces the CPU load to a minimum. My Emby server takes advantage of the integrated GPU of my i3-8100, and even transcoding two 4K movies in parallel pegs my CPU at no more than 30%. Btw, Plex / Emby also transcode when the movie has embedded subtitles and/or your client does not support the audio codec (e.g. DTS or EAC3 with an Amazon Fire TV Stick).
  24. Thanks for this app! Works quite well even with 6.9.0b22
  25. Hi, is there a way to sync various shares of my array to my external SSD, which I have mounted via Unassigned Devices? Ideally including a script that synchronizes the shares to the SSD automatically on a regular basis (e.g. weekly). The reason is that I want a) a backup solution for the shares I keep on the cache drive (set to "prefer"), but also b) to have some files from array shares on a "to-go" drive. I thought about using rsync, but I cannot find it in the Apps. Thanks!