
Posts posted by SpaceInvaderOne

  1. Hi @slimshizn, thanks for posting that.

    Looking at your IOMMU groups, you do not need to use the PCIe ACS override patch; the grouping is fine as it is. Also, if you don't get an error when starting the VM, then you don't need to enable unsafe interrupts either. So that's good news.

     

    I think your crash problem is related to the USB controller: the Fresco Logic FL1100 USB 3.0 Host Controller.

    If I remember correctly, @CHBMB discovered that this chipset doesn't play nicely with Supermicro motherboards and would cause a crash. Please try removing the USB card from the server, do some testing, and see if that resolves your problem.

     

    Edit -- found the post where CHBMB discussed these findings.

     

  2. Hi @slimshizn, sorry to hear that you are having problems with your server.

    Reading through your post, I was a little confused by the part about MSI interrupts. I am guessing that you are adding it to your syslinux config file (or setting it in VM settings) and allowing unsafe interrupts. Unsafe interrupts are used when your server doesn't support interrupt remapping; this is different from MSI interrupts in the VM. I know, too many bloody interrupts get confusing :)

    MSI interrupts are enabled in the guest OS and are normally used when passed-through sound isn't working correctly and breaks up a bit; it's commonly known as demonic sound. But this wouldn't help with the VM crashing.

     

    So to troubleshoot, let's start from the beginning. Can you try the following, please? Remove all of the custom settings in Settings/VM Manager (and/or the syslinux config if you added things manually):

    set both PCIe ACS override and VFIO allow unsafe interrupts to disabled, and reboot your server. (The syslinux entries in question are sketched below.)
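
    For reference, if you added them manually, the entries in question live on the append line of your syslinux config and look something like this (flags from memory; your line may differ):

    append pcie_acs_override=downstream vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot

    Removing those two flags (so the line ends with just initrd=/bzroot) is the same as setting both options to disabled in the GUI.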

    Once rebooted, please go to Tools/System Devices and copy your PCI Devices and IOMMU Groups into this thread so we can see them.

    That way we can see what GPU you have and its natural IOMMU grouping.

    Then, with your GPU (and its sound counterpart) passed through to the VM, start the VM. If you get an error, copy it and paste it into this thread as well.

    Then we can advise you on what settings to try next. :)


  3. @Josh.5 yes, having SCHEDULE_FULL_SCAN_MINS set to 0 so that it doesn't scan the library at all would be great. Most of the time I would just like to point the container at my media and have it only convert new things as they are added.
    However, having the ability to disable the inotify watcher in the template may be useful too. I was thinking of using two instances of the container: one which would run 24/7 doing only the inotify watcher encodes (probably limited to a couple of cores), and another which I would have User Scripts start at night and stop in the morning, working only on the library encodes but using all the available cores.

    Yes, using RAM for the cache files is a good idea, but I guess one would have to keep the worker count low, otherwise it would use too much RAM.

    I was wondering what the use case would be for multiple workers beyond one for each library location and one for the inotify watcher. I did notice that when running the container on 8 cores it would just about max them out, but running it on 16 cores only about 45% of each core is used. So I guess having more workers would saturate higher core counts? Or is there another use case?

     

    Oh, also, I forgot to ask before: would it be possible to adjust the various video encoding quality settings, please? (Although the ones used seem to produce good results from what I have seen.)

     

    Anyway, I just want to thank you for all your work on this container, it's great.

     

  4. Hi @Josh.5

     

    Great work. I have had a play with the container and it's really nice. :)

    My thoughts:

    1. I can't get the container not to scan the library folder. Even with the variable set to zero, when the container starts it will still start going through the files that are there. It would be good if the scan on container start could be disabled.

    2. I am used to looking in the log on the Handbrake container to see how far through an encode job it is. It would be great if we could see this in the log here too.

    3. If the container is stopped, the temp file is left behind. This isn't a problem if the container is started again and does the same job, but if for some reason it doesn't (maybe the source file has been moved or deleted), the file stays there. Maybe the container should erase that temp folder when it starts.

    4. I agree with @trekkiedj, multiple library folders would be great.


  5. Hi @Josh.5, yes, it sounds great. Please let me know when it's ready; I would love to try your container. It sounds like a much more elegant solution than mine, and much easier. It is annoying having to wait for conversion before watching. Have you seen the Unraid port of FallingSnow's H265ize? I had a quick look at that yesterday as it will convert whole folders full of video, and I was thinking of using it to convert some of my existing media. But your container sounds like it takes that one step further, can't wait to try it.

  6. The 16 - 16, etc., isn't really telling you the slot's maximum speed but what it is currently running at. To get the correct speed you really need to put a load onto the card.

    It is to do with the GPU's power saving when it is not being used.

    It's quite easy to see this in a Windows VM using GPU-Z. There, under bus speed, you will see the speed the card can run at; hover over it and it will tell you the speed the card is currently running at. Then, if you click on the question mark to the right, you get the option to put some stress on the GPU, and you will see the number change.

     

  7. Hi guys, I have been thinking about how to get Sonarr and Radarr to automatically re-encode downloads using Handbrake.

    There will be two parts to this, and I will post the next one under this in a few days' time.

     

    This video, the first part, shows how to automatically re-encode video files downloaded by Sonarr or Radarr using an advanced Docker path mapping, sending the media files through Handbrake before they are then processed by Sonarr/Radarr.
    I use H.265 as I want smaller video files, but any format can be chosen. This first part goes through the principles of how this works. The second video will go deeper with detailed examples. It is recommended to watch this video first; a rough sketch of the path-mapping idea is below.
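
    To illustrate the principle, the mappings might look something like this (the paths and folder names here are hypothetical, yours will differ):

    Handbrake:  /watch      ->  /mnt/user/downloads/complete    (completed downloads land here)
                /output     ->  /mnt/user/downloads/converted   (encoded files are written here)

    Sonarr:     /downloads  ->  /mnt/user/downloads/converted   (Sonarr imports from here)

    Handbrake picks up each finished download from its watch folder, re-encodes it, and drops the result where Sonarr/Radarr are looking, so they only ever import the converted file.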
     

    PART ONE

     

    PART TWO

     

  8. 1 hour ago, scud133b said:

    I'm still stuck trying to provision the certificate. Getting the exact "timed out" error that @SpaceInvaderOne says is most likely caused by firewall issues. I've set the port forwarding in my router exactly how @SpaceInvaderOne describes in his tutorial, and I have the Let'sEncrypt container config set to the same ports: https://imgur.com/a/6fvhKWy

     

    I'm using the duckdns container and I've already confirmed that it has been updating correctly.

     

    Any ideas where to start troubleshooting this?

    The problem is probably that your ISP is blocking port 80, which some do.

    Because of this, HTTP authentication will fail.

    You can work around this, but you will need to buy your own domain.

    Then sign up for a free Cloudflare account and add your domain to it.

    You would point your own subdomains (using CNAME records) at your DuckDNS address (for example nextcloud., sonarr. and radarr. on yourdomain.com to myserver.duckdns.org), as sketched below.
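
    In Cloudflare's DNS panel, the records would look something like this (the domain here is a placeholder):

    nextcloud.yourdomain.com    CNAME    myserver.duckdns.org
    sonarr.yourdomain.com       CNAME    myserver.duckdns.org
    radarr.yourdomain.com       CNAME    myserver.duckdns.org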

     

    When this is set up, you would then change the template for letsencrypt to use DNS authentication and Cloudflare, like this.

    [screenshot: the letsencrypt template set to DNS validation with the Cloudflare plugin]

     

    Then you will need to go to your appdata share, then letsencrypt, and the folder dns-config.

    Here you will find a file called cloudflare.ini. In this file you will need to put the email address which you used to sign up for Cloudflare and also your Cloudflare API key.
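
    If I remember right, the file looks something like this (the values here are placeholders; use your own email and the API key from your Cloudflare profile):

    dns_cloudflare_email = you@example.com
    dns_cloudflare_api_key = 0123456789abcdef0123456789abcdef01234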

    Once you have done this, restart letsencrypt and it will validate and generate the certs that you need.

    Hope that helps.

  9. 5 hours ago, dgwharrison said:

    Hi, so my dockers are getting IPs the way they're supposed to, 172.17.0.2 for example. However, they don't seem to be able to communicate with each other by hostname, so I can't get sonarr to talk to sabnzbd by any other means than IP. The problem is that when the dockers are restarted, whether by the system or manually, they're given new IP addresses, reusing the old ones. So say I tell sonarr to find sabnzbd on 172.17.0.2; after a reboot it might be 172.17.0.3 and I have to manually reconfigure.

     

    This sucks. I've not used Docker a lot, but the last time I did I recall being able to use hostnames.

     

    I surely can't be the only one with this problem, so I figure I've missed something. Any ideas?

     

    Is there any reason why you need to have the same IP assigned to a container, or to connect through its hostname, each time it starts?

    For most purposes you would access a container using the server's IP address and the port which the container is using.

    For example, for me NZBGet is on port 6789. So to access it I type the IP of my Unraid server (10.10.20.199) then the port.

    So 10.10.20.199:6789

    [screenshot: NZBGet webUI accessed via the server IP and port]

     

    So if I wanted to configure Sonarr to talk to NZBGet, I would tell it to use that IP, as that mapping will not change.

     

    [screenshot: Sonarr's download client settings using the server IP and port]

     

    However, there may be use cases whereby you might want to have containers talk by hostname; for example, with a reverse proxy it is easier to use hostnames.

    To do this you have to use a custom Docker network. I showed how in the video I made on the letsencrypt reverse proxy here, which you may want to check out.

    https://youtu.be/I0lhZc25Sro?t=688
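
    The gist of it: create a user-defined bridge network and put the containers on it, and Docker's built-in DNS will then let them resolve each other by container name. As a sketch (the network name proxynet is just an example):

    docker network create proxynet

    Then set each container's network to proxynet in its template (or add --network=proxynet to the extra parameters), and Sonarr could reach NZBGet at http://nzbget:6789 (using whatever your container is actually named) no matter which IP it gets.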

     

  10. 15 hours ago, xerox445 said:

    Every time I try to create this VM, I am booted to the UEFI Interactive Shell V2.2...any ideas? I am just stuck at a command line.

    If you are dropping into the UEFI shell then it is most likely a problem with the install media.

    It is trying to boot but not finding any UEFI-compatible media to boot from. This could be one of two things:

     

    1. (Most likely) The Windows 10 media isn't UEFI compatible, so it doesn't boot.

    I have had old Windows 10 images I downloaded a while back, and they aren't UEFI compatible. So get a new image; try downloading Windows 10 directly from the Microsoft website here: https://www.microsoft.com/en-us/software-download/windows10ISO

     

    2. The VM template is trying to boot from the vdisk first, so the boot order means the Windows 10 install media isn't being booted from. It's trying to boot from the vdisk, can't, and so drops into the UEFI shell. Delete the Windows template that you are using now and start fresh to avoid this; the XML sketch below shows the sort of thing to check.
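
    For reference, in the VM's XML the boot order is set per device. A trimmed-down sketch (the file paths here are hypothetical) of what you want: the install ISO carries boot order 1 and the vdisk order 2.

    <disk type='file' device='cdrom'>
      <source file='/mnt/user/isos/Windows10.iso'/>
      <boot order='1'/>   <!-- install media boots first -->
    </disk>
    <disk type='file' device='disk'>
      <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
      <boot order='2'/>   <!-- then the vdisk -->
    </disk>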

  11. On 10/26/2018 at 12:39 AM, RSQtech said:

    I don't really get what I have done wrong.... any advice?

     


        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/domains/MacOS Mojave/clover.qcow2'/>
          <target dev='hdc' bus='sata'/>
          <boot order='1'/>
          <address type='drive' controller='0' bus='0' target='0' unit='2'/>
        </disk>
       


    The problem is that the disk image clover.qcow2 is, well, a qcow2 image :) The MacOS VM will boot a qcow2 image fine.

    But it seems like you forgot to set the disk type to qcow2 for this one, so it is being treated as a raw image.

    The easiest way to change it (as you can't change it in the template now) is just to change the XML from this:

    <driver name='qemu' type='raw' cache='writeback'/>

    to this

     <driver name='qemu' type='qcow2' cache='writeback'/>

    Then it will boot fine and not hit the EFI shell.

     

  12. Um, I did a bit of testing tonight. Not good news for us AMD guys.

    I forgot that AMD's virtualisation extension is SVM, not VMX.

    So SVM would need to be passed through on AMD CPUs for that to work.

    However, running 

    sysctl machdep.cpu.features

    on OSX, it doesn't show SVM as present. Strange, as if I do the same on a Debian VM and run

    cat /proc/cpuinfo

    I can see SVM passed through, even though I am running that VM with the same emulated Penryn CPU as my OSX VM.

     

    The thing is, I don't think that Docker will ever work on a VM hackintosh using an AMD CPU.

    I think it will always look for VMX (not SVM), because Apple never expects an AMD CPU, and thus SVM, to be present. So it will fail to load, complaining about the CPU.

     

    I don't have any Intel hardware here to test whether Docker would work on an Intel-based VM hackintosh with vmx & rdtscp passed through. I think it should.

     

    Maybe @1812, if you have time, could you please try this on your Intel VM hack:

    <qemu:arg value='Penryn,vendor=GenuineIntel,kvm=on,vmx,rdtscp,+invtsc,vmware-cpuid-freq=on,'/>

    then run 

    sysctl machdep.cpu.features

    and see if vmx is listed. Then, if so, check

    sysctl kern.hv_support

    and see if you get a 1. If so, does Docker work on OSX?

  13. Hi @dashtripledot, yes, you "should" be able to do this using Penryn by adding the needed features.

     

    First, you will have to make sure nested virtualisation is enabled on the Unraid server.

    I believe it is enabled by default in the latest Unraid; older versions I am not sure about.

    You can check by running

    for AMD CPUs

    cat /sys/module/kvm_amd/parameters/nested

    for Intel

    cat /sys/module/kvm_intel/parameters/nested

     

    If you get a 0 it's disabled, and if you get a 1 then it's enabled.

     

    If you get a 0, then you can enable it by unloading the module (make sure all VMs are shut down first):

    amd

    modprobe -r kvm_amd

    or intel

    modprobe -r kvm_intel

    then running

    amd

    modprobe kvm_amd nested=1

    or intel

    modprobe kvm_intel nested=1

    ------------------------------------------

     

    Then you will have to pass through these CPU features in the XML line that you quoted: vmx and rdtscp.

     

    So that line would look like:

     

    <qemu:arg value='Penryn,vendor=GenuineIntel,kvm=on,vmx,rdtscp,+invtsc,vmware-cpuid-freq=on,'/>

    You can add to this line any CPU features that the host has, to improve the CPU in the MacOS guest.

    This one will improve performance:

    <qemu:arg value='Penryn,vendor=GenuineIntel,kvm=on,vmx,rdtscp,+invtsc,+avx,+avx2,+aes,+xsave,+xsaveopt,+ssse3,+sse4_2,+popcnt,vmware-cpuid-freq=on,'/>

    So either of the two lines above should make Docker work in the MacOS VM.

  14. 5 minutes ago, Squid said:

    Remind me next weekend

    Great, thanks. Will do.    .........Alexa, set reminder.......what's the reminder for?.....to remind @Squid......when should I remind you?....next Saturday.....what time?.....10 o'clock.....is that 10 o'clock in the morning or the evening?.....the morning.....OK, I will remind you to remind Squid at 10 o'clock on Saturday......thank you, Alexa...

     

    OK, let's see if bloody Alexa will work!! 😁

  15. Hi @Squid, is there any chance you could make an option so that a custom tab just opens over the parent window? I would really like to have a link from each of my servers to the other server's webGui, like this. Then I could easily switch from server to server from within the Unraid webGui.

    [screenshot: custom tabs in the Unraid webGui linking to another server's webGui]

     

    😀Please! 😀

  16. Hi guys. This video is a tutorial on how to examine the topology of a multi-CPU or Threadripper server which has more than one NUMA node. This is useful so we can pin vCPU cores from the same NUMA node as the GPU we want to pass through, therefore getting better performance. The video shows how to download and install hwloc and all of its dependencies using a script and @Squid's great User Scripts plugin. Hope you find it useful :)

     

    ------------------------------------------------------------

    **Note** If using the new 6.6.0 RC versions of Unraid (or above), before running the lstopo command you will need to create a symlink using this command first:

    ln -s /lib64/libudev.so.1 /lib64/libudev.so.0

    **Don't run the above command unless on Unraid 6.6.0 or above!**

    ------------------------------------------------------------

    EDIT--

    Unraid now has lstopo built in! You will need to boot your server in GUI mode for it to work. Once you are in GUI mode, just open the terminal, run the command, and you are good to go. Much easier than messing about loading it manually like in my video.
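
    As a side note, from memory lstopo can also write its output in other formats if you don't want the graphical window, for example:

    lstopo --of console          # text summary of the topology in the terminal
    lstopo -p topology.png       # save the diagram as a picture, showing physical core numbers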

  17. 7 hours ago, Enver said:

    Hello @Gridrunner.

     

    I have successfully port forwarded the services, mapped the CNAMEs to the DNS record in DuckDNS and also verified the certificates are working as per the logs. I get the NGINX landing page for both Binhex-Sonarr and Binhex-Radarr when I attempt to browse to the CNAMEs. So quite obviously DNS, CNAME and port forwarding are working. I have double-checked both config files (sonarr.subdomain.conf and radarr.subdomain.conf), and they appear to be OK. See below extract:

     

    # make sure that your dns has a cname set for sonarr and that your sonarr container is not using a base url
    # to enable password access, uncomment the two auth_basic lines

    server {
        listen 443 ssl;

        server_name sonarr.*;

        include /config/nginx/ssl.conf;

        client_max_body_size 0;

        location / {
            auth_basic "Restricted";
            auth_basic_user_file /config/nginx/.htpasswd;
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_sonarr binhex-sonarr;
            proxy_pass http://$upstream_sonarr:8989;
        }
    }
     

    Please advise.

     

    Enver


    Hi @Enver

    Firstly, are Sonarr and Radarr on a custom user-defined Docker network?

    I see from the above config that you have removed the hashes before the auth_basic parts to use a password.

    Have you created the password file in the container itself by running:

    htpasswd -c /config/nginx/.htpasswd <yourusername>
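
    (If you want to run it from the Unraid terminal instead of a console inside the container, something like this should work, assuming your container is named letsencrypt:)

    docker exec -it letsencrypt htpasswd -c /config/nginx/.htpasswd <yourusername>

    It will then prompt you for the password to set.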

    Personally, I didn't have luck using both the .htpasswd file and Sonarr's own password system together; it just didn't seem to work for me. Not sure why, but I am sure the devs here could shed light on it. So I only use Sonarr's password in that container, without .htpasswd. I would try to get it working without the htpasswd first, then add that later when you are sure it works fine.

  18. 19 minutes ago, Froger said:

    I got stuck at creating the custom network proxynet. It looks like everything went well with creating it in the terminal, but somehow letsencrypt is not seeing that network. Any hints?

    Are you running the latest unRAID? You will only see it in the dropdown from 6.5.1 onwards. For older unRAID builds you will have to go to advanced settings, then manually enter it into the extra parameters like this:

    --network=[networkname]

    I would upgrade to the latest stable unRAID unless there is any reason that you must stay on the older one.

  19. 41 minutes ago, 1812 said:

     

    so even if it's enabled on the ports that are forwarded, I'm looking for a general "allow nat reflection" or similar, correct?

     

     

    Found the setting finally: Firewall: Settings: Advanced: Automatic outbound NAT for Reflection

     

    thanks!

    How do you find OPNsense? I haven't tried it. I know it's a fork of pfSense. Any reason you use it instead of pfSense?
