caplam

Posts posted by caplam

  1. I have a question about the same topic.

    I set up Plex to use /transcode for transcoding.

    I mapped /tmp/plex on the host to /transcode.

    Yesterday I played a 4K movie which needed to be transcoded. In the /transcode/sessions directory of the container I saw numerous chunks of m2ts files.

    While this was happening, docker.img was filling up.

     

    Today I changed the mapping to /tmp -> /transcode.

    When I play a movie I now see on the host the directory /tmp/Transcode/Sessions, which contains a subdirectory: plex-transcode-oqrnbokfeqtz9q91j7lx3oi1-fc7b00aa-1215-4c0a-8150-147ff5696762

    Why is it in /tmp/Transcode and not /tmp/transcode? Is the Transcode directory name hardcoded? 

    At least this time it transcodes in RAM.
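    A quick way to confirm the mapped directory really is RAM-backed is to check the filesystem type behind it. This is only a minimal sketch, assuming GNU coreutils `df` is available on the host (the `is_tmpfs` helper name is my own):

    ```shell
    #!/bin/sh
    # Sketch: report whether a path sits on tmpfs (i.e. is RAM-backed).
    # Assumes GNU coreutils df; is_tmpfs is an illustrative helper name.
    is_tmpfs() {
        [ "$(df --output=fstype "$1" 2>/dev/null | tail -n 1)" = "tmpfs" ]
    }

    if is_tmpfs /tmp; then
        echo "/tmp is tmpfs: transcodes there stay in RAM"
    else
        echo "/tmp is NOT tmpfs: transcodes will hit disk"
    fi
    ```

    If /tmp is not tmpfs on a given box, the same check works on any other candidate path before pointing the container mapping at it.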

  2. Hi everyone,

     

    I've set up a Catalina VM using Macinabox. Thank you spaceinvaderone for that.

    I have some problems.

    1- When editing the XML, some things change automatically: the paths of the OVMF files, which I set to /mnt/user/domains/catalina,

    change to something in /usr/share/... for one file and /etc/libvirt/... for the other whenever I edit the XML. When I start the VM with those paths, the display becomes weird and it enters a boot loop. So I have to change the paths back every time I edit the XML. 

    Apparently that's normal; forget it.

     

    2- I have no audio device in the VM. I tried with and without passing through the host sound card.

    3- For now I have no GPU to pass through. I use RealVNC Viewer, as the standard VNC from Unraid can't connect. I tried NoMachine, but the keyboard mapping is buggy and it only connects one time in ten. I also tried Splashtop, but it eats CPU and the VM is unusable.

     

    Here is my XML file:

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm' id='15' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>MacinaboxCatalina</name>
      <uuid>ff4e1131-1ab6-4271-832e-2b2aebc48432</uuid>
      <description>MacOS Catalina</description>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="default.png" os="Catalina"/>
      </metadata>
      <memory unit='KiB'>16777216</memory>
      <currentMemory unit='KiB'>16777216</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>8</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='2'/>
        <vcpupin vcpu='1' cpuset='18'/>
        <vcpupin vcpu='2' cpuset='3'/>
        <vcpupin vcpu='3' cpuset='19'/>
        <vcpupin vcpu='4' cpuset='4'/>
        <vcpupin vcpu='5' cpuset='20'/>
        <vcpupin vcpu='6' cpuset='5'/>
        <vcpupin vcpu='7' cpuset='21'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
        <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_CODE.fd</loader>
        <nvram>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_VARS.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='4' threads='2'/>
      </cpu>
      <clock offset='utc'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/mnt/cache/domains/MacinaboxCatalina/Clover.qcow2' index='3'/>
          <backingStore/>
          <target dev='hdc' bus='sata'/>
          <boot order='1'/>
          <alias name='sata0-0-2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='2'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/cache/domains/MacinaboxCatalina/Catalina-install.img' index='2'/>
          <backingStore/>
          <target dev='hdd' bus='sata'/>
          <alias name='sata0-0-3'/>
          <address type='drive' controller='0' bus='0' target='0' unit='3'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/mnt/cache/domains/MacinaboxCatalina/macos_disk.qcow2' index='1'/>
          <backingStore/>
          <target dev='hde' bus='sata'/>
          <alias name='sata0-0-4'/>
          <address type='drive' controller='0' bus='0' target='0' unit='4'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <alias name='usb'/>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <alias name='usb'/>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <alias name='usb'/>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'>
          <alias name='pcie.0'/>
        </controller>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x10'/>
          <alias name='pci.1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x11'/>
          <alias name='pci.2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0x12'/>
          <alias name='pci.3'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0x13'/>
          <alias name='pci.4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <alias name='virtio-serial0'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <alias name='ide'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:98:7c:28'/>
          <source bridge='br0'/>
          <target dev='vnet3'/>
          <model type='vmxnet3'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <source path='/dev/pts/3'/>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
          <alias name='serial0'/>
        </serial>
        <console type='pty' tty='/dev/pts/3'>
          <source path='/dev/pts/3'/>
          <target type='serial' port='0'/>
          <alias name='serial0'/>
        </console>
        <channel type='unix'>
          <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-15-MacinaboxCatalina/org.qemu.guest_agent.0'/>
          <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
          <alias name='channel0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <alias name='input0'/>
          <address type='usb' bus='0' port='1'/>
        </input>
        <input type='mouse' bus='ps2'>
          <alias name='input1'/>
        </input>
        <input type='keyboard' bus='ps2'>
          <alias name='input2'/>
        </input>
        <graphics type='vnc' port='5903' autoport='yes' websocket='5703' listen='0.0.0.0' keymap='fr'>
          <listen type='address' address='0.0.0.0'/>
        </graphics>
        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
          <alias name='video0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
        </video>
        <memballoon model='virtio'>
          <alias name='balloon0'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </memballoon>
      </devices>
      <seclabel type='dynamic' model='dac' relabel='yes'>
        <label>+0:+100</label>
        <imagelabel>+0:+100</imagelabel>
      </seclabel>
      <qemu:commandline>
        <qemu:arg value='-usb'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='usb-kbd,bus=usb-bus.0'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='isa-applesmc,osk=redacted'/>
        <qemu:arg value='-smbios'/>
        <qemu:arg value='type=2'/>
        <qemu:arg value='-cpu'/>
        <qemu:arg value='Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+pcid,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check'/>
      </qemu:commandline>
    </domain>

     

  3. For now it's working as intended. I'm waiting for a Quadro P400 to accelerate the process.

    I have a question: how would you get Tdarr to re-encode HEVC into HEVC when the bitrate is higher than a certain value? For now it's converting my h264 media files to HEVC.

    I also have some big HEVC files and I'd like to have them downsized. 

    For now Tdarr doesn't touch HEVC files.
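    I don't know Tdarr's plugin API well enough to say how it would be wired up there, but the decision itself can be sketched in shell: probe the codec and bitrate, and only re-encode an HEVC source above a chosen threshold. The 8 Mb/s value and the `over_threshold` function are illustrative assumptions, not Tdarr's actual logic:

    ```shell
    #!/bin/sh
    # Sketch of the decision only; the 8 Mb/s threshold and function name
    # are illustrative assumptions, not Tdarr's actual behavior.

    # Re-encode an hevc stream only when its bitrate exceeds the threshold.
    over_threshold() {  # args: codec bitrate_bps threshold_bps
        [ "$1" = "hevc" ] && [ "$2" -gt "$3" ]
    }

    # How the two inputs could be obtained for a real file (needs ffprobe):
    #   codec=$(ffprobe -v error -select_streams v:0 \
    #             -show_entries stream=codec_name -of default=nk=1:nw=1 "$f")
    #   rate=$(ffprobe -v error -show_entries format=bit_rate \
    #             -of default=nk=1:nw=1 "$f")

    if over_threshold hevc 12000000 8000000; then
        echo "12 Mb/s HEVC: would re-encode"
    fi
    if ! over_threshold h264 12000000 8000000; then
        echo "h264 is left to the normal h264 -> hevc path"
    fi
    ```

    The point of keeping the comparison separate is that h264 files never match, so they keep flowing through whatever generic plugin already handles them.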

  4. I think you have found answers on the forum already, but in short, yes, you can do that with Unraid.

    You like to live dangerously: RAID 0 for your media files. 

    For example, my Unraid server runs 5 or 6 VMs and 45 Docker containers, and handles my media libraries. I suggest you read the forum to learn how to properly set up your server.

  5. I had the same kind of weird thing yesterday.

    The first symptom was that the CPU load no longer displayed on the dashboard.

    Then: a bad gateway error when trying to open the web terminal.

    I also couldn't run some commands, like changing a share property, stopping the Docker engine, stopping the VM engine, rebooting, or stopping the array.

    When I clicked on the command, the script launched and took 100% of one core.

    I ended up running powerdown -r. After the reboot it was OK. I was lucky not to have a problem with the disks.

     

    Edit: I could not stop all containers at once, but I could stop them one by one. Same thing for the VMs.

  6. I think I will go with a Quadro P2000, as I'd like to preserve slots on the motherboard. The P2000 is a single slot high.

    Inno3D made a single-slot GTX 1050 Ti, but I can't find one at a good price.

    So I have to find a GPU compatible with passthrough to a macOS VM, and I'd rather find a single-slot one.

    The HP Z620 doesn't have many expansion slots.

  7. From what I've read, yes, you can.

    The difficulty is choosing the right GPU.

    For Plex you'll want an Nvidia GPU. 

    For the VM it depends on the VM type. I also have a thread about finding the right GPUs. 

    But I've read that you don't want two of the same GPU.

    For now I'm leaning toward a GTX 1050 Ti for use with Docker containers (Plex and Tdarr).

    But for the second one I don't know which GPU to choose.

  8. On 2/16/2020 at 9:12 PM, Jeff in Indy said:

    I’m going to build one in a week or two, and I’m getting the Fractal Design Node 804. Instead of being really tall, the MB is one side, the Power and drives are on the other side.



    I think he's looking for a DAS case (no motherboard).

    I'm planning to add a DAS too, and I can't have a rackmount case. So far the best option I've seen is the SilverStone DS380.

    You need a bracket to plug in the SAS cable. If you use only the 8 hot-swap bays, you can use SAS cables directly. 

    If you plan to add 4 more SSDs (or 2.5" drives), you have to use an expander. 

    You also have to find a power control board to switch on the ATX power supply.

  9. Hello everybody,

    I have an Unraid server running on an HP Z620 (2x E5-2650 v2, 32 threads total, with 128GB of RAM).

    It runs 6 or 7 VMs (4 Debian, 1 Windows 8, and 1 or 2 macOS High Sierra and Catalina).

    It also runs 45 Docker containers.

    For now I have an old Nvidia Quadro NVS 280.

    As I recently installed the Tdarr container to re-encode my whole library, I plan to add a Pascal GPU (probably a GTX 1050 Ti, to save my budget). This GPU will be used by the Tdarr and Plex containers for transcoding. Right now, with CPU-only encoding, I'd have to wait 4 months for my library to be re-encoded. So I hope to accelerate that and gain the ability to transcode with Plex (not a big need, as most of my clients can play HEVC streams). 

    From what I have read, the GPU can be shared by several containers, and Nvidia is the obvious choice.

    The second part of my problem is the macOS VM.

    My Unraid server is in the basement. I work in my office on a 2012 MBA, which I like, but it sometimes runs out of RAM and CPU power. So I would like to move my desktop environment to a VM. For now it works with Splashtop Streamer or NoMachine, but it's not really fluid, so I don't really use it.

    So I have questions about the config.

    Would I be able to access my macOS VM remotely and flawlessly with a dedicated GPU?

    If so, which GPU should I choose? Would an AMD one be a better choice?

    Would it run with an Nvidia one (and the LSIO Nvidia driver) without problems?

    I'm not a GPU guy, so I know almost nothing about this, especially on macOS.

    And I almost forgot: do I need one or more dummy DP or HDMI plugs?

    Thank you in advance for your answers.😀

  10. 24 minutes ago, HaveAGitGat said:

    Yes, any card which supports NVENC will work with Tdarr:

    https://developer.nvidia.com/video-encode-decode-gpu-support-matrix

     

    P400 would be fine for me. I use a 1050Ti. Those cards perform the same for transcoding but the P2000 has unlimited concurrent transcodes.

     

    NVENC still uses lots of CPU so you probably wouldn't want to use 2 separate transcodes on the CPU. You can do that though.

    I have an old machine, but it can run 4 transcodes on the CPU. I have 2x 2650 v2 with 128GB of RAM.

    I wonder how fast a P400 would be compared to that?

  11. After running overnight it seems to work quite well. Like Unmanic, it transcodes almost 4 files an hour with the CPU. I have around 3000 files left to transcode, which should take around 4 months.

    I'm looking for a GPU (for now I have an old Nvidia Quadro 280, which is useless) to accelerate the process.

    The P400 is not expensive and draws around 30W. It can transcode 2 streams simultaneously, but I don't know at what speed.

    The P2000 is more expensive and draws 75W. 

    Do these 2 cards work with Tdarr?

    What would you choose? 

    Once my library is transcoded, I will not need more than 2 transcodes at a time.

    Can Tdarr transcode 2 files with the GPU and 2 others with the CPU at the same time?

  12. Hello,

     

    I've just installed the container and I'm trying to understand how it works.

    My goal is to re-encode into HEVC all my video files which are not already HEVC, and leave the audio and subtitle streams untouched (passthrough).

    I created my first library. It's a movie library which was re-encoded last month with Unmanic.

    As I don't want to break my library, I first ran a health check with 4 workers. It was OK.

    For the setup of the library:

    Source: OK, the folder is correctly mapped and I can select the movie subfolder.

    Transcode cache: I selected /tmp/Tdarr for the mapping; the cache folder will be in RAM (I have plenty).

    Output folder: I left it untouched. My understanding is that transcoded files will overwrite the original ones.

    Containers: I removed m2ts from the list; a Blu-ray contains hundreds of m2ts files and I don't want to convert them all.

    Transcode: I checked plugin stack.

    I checked Tdarr_Plugin_lmg1_Reorder_Streams to have the streams in the correct order.

    I checked Tdarr_Plugin_075a_FFMPEG_HEVC_Generic, as I only want video encoding of non-HEVC files.

    Then I checked process library and ran scan (new).

    I then selected 1 transcode worker.

    I hit a bug, as 3 transcode workers were started. I killed 2; 1 is enough to see the result.

     

    Waiting to see the result. For now I have a concern.

    The library has been scanned: 462 files. 7 are h264 videos and 455 are h265.

    Despite the transcode settings, 205 files have been queued for encoding. Why not only 7?

    In the search tab, when I search for h264, it finds 9 files, and it clearly indicates that 2 of them are HEVC.

    When I look at the queue I see HEVC files, but I shouldn't.

    Do you know what might be the problem?

     

    When I click skip in the queue, it indicates transcode success. The files which were not in the queue are marked "not required".

     

    Edit: 30 minutes after the beginning, Tdarr deleted the 199 h264 files in the queue and marked them as transcode not required. I don't know why exactly.

  13. I destroyed my docker.img and am currently re-downloading containers. It's a pain with my slow connection.

    Nothing obvious about the size of the containers.

    Do you think increasing the size of docker.img is the answer?

    I have seen there is a bug in btrfs which would explain the size listed in /var/lib/docker/btrfs/subvolumes.

    When the Docker service was stopped and docker.img deleted, there was nothing in /var/lib/docker.

    As soon as I started reinstalling containers, the subvolumes directory started to fill. I think there may be several duplicates in there.

    Does it sound normal to you that I end up with the subvolumes directory listed at 163GB?

     

  14. Hello,

    I read the Docker FAQ but I'm still stuck. From what I have seen, all paths are mapped outside the Docker image.

    My docker.img is 40GB and 91% used. It's growing.

    I have 47 containers (50 images).

    I searched without success for which container was filling up the image.

    I don't understand how Docker volumes are organized.

    For example, in /var/lib/docker/btrfs/subvolumes I have 494 folders totaling 163GB.

    I suspect I have duplicate volumes, but I don't know how to detect them.

    I have Portainer installed, and with it I deleted all unused volumes; I gained 1GB (164GB -> 163GB).

    Where can I start? 
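    As a starting point, one way to see which subvolumes are eating the space is simply to sort them by size. A minimal sketch, assuming the /var/lib/docker/btrfs/subvolumes path described above and that the Docker service is running so docker.img is mounted (the `biggest` helper name is my own):

    ```shell
    #!/bin/sh
    # Sketch: list the largest entries under a directory, biggest first,
    # to spot which image/container layers take the space.
    # The subvolumes path is an assumption based on the post above; run it
    # while the Docker service is started so docker.img is mounted.
    biggest() {  # arg: directory to inspect
        du -sk "$1"/* 2>/dev/null | sort -rn | head -n 20
    }

    biggest /var/lib/docker/btrfs/subvolumes
    ```

    Mapping the long subvolume hashes back to containers is the harder part; `docker ps -s` reports each running container's writable-layer size, which can help narrow down the culprit.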
