SpaceInvaderOne last won the day on October 25

SpaceInvaderOne had the most liked content!

Community Reputation: 298 (Very Good)

About SpaceInvaderOne

  • Rank
    The Artist Formerly Known as Gridrunner
  • Birthday April 12


  • Personal Text
    As you think, so shall you become.

  1. SpaceInvaderOne

    Shutting down VM causes unRaid to CRASH

    Another thing I forgot to mention: make sure the onboard ASPEED VGA is set as the primary GPU in the motherboard's BIOS. If it is, it should output the Unraid console on its display.
  2. SpaceInvaderOne

    Shutting down VM causes unRaid to CRASH

    Hi, @slimshizn thanks for posting that. Looking at your IOMMU groups, you do not need to use the PCIe ACS override patch; the grouping is fine as it is. Also, if you don't get an error when starting the VM, then you don't need to enable unsafe interrupts either. So that's good news. I think your crash problem is related to the USB controller: Fresco Logic FL1100 USB 3.0 Host Controller. If I remember correctly, @CHBMB discovered that this chipset doesn't play nicely with Supermicro motherboards and would cause a crash. Please try removing the USB card from the server, do some testing, and see if that resolves your problem. Edit -- found the post where CHBMB discussed these findings.
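    For anyone who wants to check their grouping from the console, this generic sketch prints the same information that Tools/System Devices shows (it reads standard Linux sysfs paths, nothing Unraid-specific):

```shell
#!/bin/bash
# Print every IOMMU group and the PCI devices inside it.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        # lspci -nns shows the slot, device name, and vendor:device IDs
        lspci -nns "${dev##*/}"
    done
done
```

    If the GPU and its audio function share a group with nothing else important in it, no ACS override is needed.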
  3. SpaceInvaderOne

    Shutting down VM causes unRaid to CRASH

    Hi @slimshizn sorry to hear that you are having problems with your server. Reading through your post I was a little confused by the MSI interrupts. I am guessing that you are adding to your syslinux config file / the setting in VM settings and allowing unsafe interrupts. Unsafe interrupts are used when your server doesn't support interrupt remapping; this is different to MSI interrupts in the VM. I know, too many bloody interrupts gets confusing. MSI interrupts are enabled inside the guest OS and are normally used when the sound isn't working correctly and breaks up a bit -- commonly known as demonic sound. But this wouldn't help with the VM crashing. So to troubleshoot, let's start from the beginning. Can you try the following, please? Remove all of the custom settings in Settings/VM Manager (and/or the syslinux config, if you manually added things): set both PCIe ACS override and VFIO allow unsafe interrupts to disabled, then reboot your server. Once rebooted, please go to Tools/System Devices and copy your PCI Devices and IOMMU Groups into this thread so we can see them. That way we can see what GPU you have and its natural IOMMU grouping. Then, with your GPU (and its sound counterpart) passed through to the VM, start the VM. If you get an error, copy it and paste it into this thread too. Then we can advise you on what settings to try next.
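    For reference, this is roughly what those two settings add to the boot line on the flash drive (illustrative only -- your exact append line may carry other flags):

```
# With both overrides enabled, the syslinux append line looks something like:
append pcie_acs_override=downstream vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot

# With both set to disabled, the clean baseline to test from:
append initrd=/bzroot
```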
  4. @Josh.5 yes, having SCHEDULE_FULL_SCAN_MINS set to 0 so it doesn't scan the library at all would be great. Most of the time I would just like to point the container at my media and have it only convert new things as they are added. However, having the ability to disable the inotify watcher in the template may be useful too. I was thinking of using two instances of the container: one which would run 24/7 doing only the inotify watcher encodes (probably limited to just a couple of cores), and the other I would have User Scripts start at night and stop in the morning, working only on the library encodes but using all the available cores. Yes, using RAM for the cache files is a good idea, but I guess one would have to keep the worker threads low, otherwise it would use too much RAM. I was wondering what the use case is for multiple worker threads beyond one for each library location and one for the inotify watcher. I did notice that when running the container on 8 cores it would just about max those cores out, but running it on 16 cores only about 45% of each core is being used. So I guess having more workers would saturate higher core counts? Or is there another use case? Oh, also, I forgot to ask before: would it be possible to adjust the various video encoding quality settings, please? (Although the ones used seem to produce good results from what I have seen.) Anyway, just want to thank you for all your work on this container, it's great.
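    The day/night split for the second instance could be sketched as a single User Scripts entry run hourly. This is only a sketch: the container name unmanic-library and the 23:00-07:00 window are assumptions, not anything from the container itself.

```shell
#!/bin/bash
# Start the heavy "library" Unmanic instance at night, stop it by day.
# "unmanic-library" is a hypothetical container name -- substitute your own.
should_run_library() {
    local hour=$1               # hour of day, 0-23
    [ "$hour" -ge 23 ] || [ "$hour" -lt 7 ]
}

hour=$(date +%-H)
if should_run_library "$hour"; then
    docker start unmanic-library 2>/dev/null || true
else
    docker stop unmanic-library 2>/dev/null || true
fi
```

    Scheduled hourly, this converges to the desired state even if the server reboots mid-window.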
  5. Hi @Josh.5 Great work. I have had a play with the container and it's really nice. My thoughts: 1. I can't get the container not to scan the library folder. Even with the variable set to zero, when the container starts it will still go through the files that are there. It would be good if the scan that happens on container start could be disabled. 2. I am used to looking in the log of the Handbrake container to see how far through an encode job is. It would be great if we could see this in the log too. 3. If the container is stopped, the temp file is left behind. This isn't a problem if the container is started again and it does the same job again; however, if for some reason it doesn't (maybe the source file has been moved or deleted), it is left behind. Maybe when the container starts it should erase that temp folder. 4. I agree with @trekkiedj, multiple library folders would be great.
  6. Hi, @Josh.5 yes, it sounds great. Please let me know when it's ready; I would love to try your container. It sounds like a much more elegant solution than mine, and much easier. It is annoying having to wait for the conversion before watching. Have you seen the Unraid port of FallingSnow's H265ize? I had a quick look at that yesterday, as it will convert whole folders full of video, and I was thinking of using it to convert some of my existing media. But your container sounds like it takes that one step further; can't wait to try it.
  7. The 16 -16 etc isn't really telling you the slot speed, but what it is currently running at. To get the correct speed you really need to put a load onto the card; it is to do with the power saving of the GPU when it is not being used. It's quite easy to see this in a Windows VM using GPU-Z. You will see the speed the card can run at under bus speed. Hover over that and it will tell you the speed the card is currently running at. Then, to the right, if you click on the question mark, you get the option to put some stress on the GPU, and you will see the number change.
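    You can see the same thing from Linux without a Windows VM. This is a generic lspci sketch; the 01:00.0 address is an example, substitute your card's address:

```shell
# LnkCap is the maximum link the card/slot supports; LnkSta is what it is
# actually running at right now (it drops under power saving, rises under load).
lspci -s 01:00.0 -vv | grep -E 'LnkCap:|LnkSta:'
```

    Run it once idle and once while the card is busy and you should see LnkSta climb toward LnkCap.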
  8. SpaceInvaderOne

    New Permissions tool not working in 6.6.5

    Ha, ok, I am going to quietly go away now feeling embarrassed! lol. I see, I didn't look closely enough.
  9. SpaceInvaderOne

    New Permissions tool not working in 6.6.5

    Changed Status to Open. Changed Priority to Minor.
  10. The built-in New Permissions tool (not Docker Safe New Perms, but the standard one) no longer works in 6.6.5.
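    For context, the tool is roughly equivalent to the following. This is a simplified sketch, not the actual Unraid script, and the share path is an example:

```shell
#!/bin/bash
# Reset ownership and permissions on a share the way New Permissions does
# (approximately): everything owned by nobody:users, group/other copying
# the user bits, execute only where it belongs (directories).
share="/mnt/user/example"   # hypothetical share path

chown -R nobody:users "$share"
chmod -R u-x,go-rwx,go+u,ugo+X "$share"
```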
  11. Hi guys, I've been thinking about how to get Sonarr and Radarr to automatically re-encode downloads using Handbrake. There will be two parts to this, and I will post the next under this one in a few days' time. This video, the first part, shows how to automatically re-encode video files downloaded by Sonarr or Radarr using an advanced Docker path mapping, sending the media files through Handbrake before they are processed by Sonarr/Radarr. I use H.265 as I want smaller video files, but any format can be chosen. This first part goes through the principles of how this works. The second video will go deeper using detailed examples. It is recommended to watch this video first. PART ONE
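    The path-mapping trick can be sketched like this. All the host paths and container names here are examples I made up for illustration, not the exact mappings from the video:

```shell
# Handbrake watches one host folder and drops finished encodes in another:
docker run -d --name=handbrake \
  -v /mnt/user/encode/watch:/watch \
  -v /mnt/user/encode/output:/output \
  jlesage/handbrake

# Sonarr's /downloads path is mapped to Handbrake's OUTPUT folder on the
# host, so Sonarr only ever sees (and imports) the re-encoded files:
docker run -d --name=sonarr \
  -v /mnt/user/encode/output:/downloads \
  -v /mnt/user/media/tv:/tv \
  linuxserver/sonarr
```

    The key idea is that the same container-side path can point at different host folders in different containers, which lets you splice Handbrake into the middle of the pipeline.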
  12. SpaceInvaderOne

    Overkill or more powwwwer?

    Hi @trl002 welcome, and glad that you have decided to join the clan and are officially going to be an 'Unraider'. For Plex, yes, I would definitely use Docker. It will be fine with 4 users, I am sure. There is no point putting it in a VM if you want it running 24/7. The only advantage I can think of for using a VM is that you can transcode with the GPU should you pass it through, but then it ties up the GPU 24/7. Also, with the 12 cores in the 2920, I don't see a problem anyway; it will easily handle those streams. And if you use Docker, with some Intel CPUs you can also use the iGPU to help with the transcoding. The only time transcoding can become a strain is if you have some movies that are Ultra HD 4K videos; if each is, say, 50 gigs a file, transcoding them will be harder on the system than transcoding four 1080p movies at 4 gigs apiece. The 2080 GTX will pass through fine for your gaming VM, and the AMD 570 should work fine in macOS and should work with Mojave. I have a similar server to the one you are planning, using a 2950X, and I really like it. Re your NVMe drives: I would install Win 10 directly on one NVMe and do that natively, as I did in this vid here https://www.youtube.com/watch?v=fnIn6GnA87c&t=298s -- then you can boot bare metal too. For macOS, put the install on another NVMe but don't pass it through; just use a vdisk on the NVMe. Performance will be almost the same. NVMe controllers passed to macOS can, in my experience, be tricky due to macOS driver issues. Anyway, good luck with your build. I am sure whatever you choose it will be awesome.
  13. SpaceInvaderOne

    [Support] Linuxserver.io - Letsencrypt (Nginx)

    The problem is probably due to your ISP blocking port 80, which some do. Because of this, HTTP authentication will fail. You can work around it, but you will need to buy your own domain. Then sign up for a free Cloudflare account and add your domain to it. You would point your own subdomains (using CNAME records) at your DuckDNS address (for example nextcloud., sonarr., radarr.yourdomain.com to myserver.duckdns.org). When this is set up, you would then change the template for Let's Encrypt to use DNS authentication with Cloudflare. Then you will need to go to your appdata share, then letsencrypt, and the folder dns-conf. Here you will find a file called cloudflare.ini; in this file you will need to put the email address which you used to sign up for Cloudflare and also your Cloudflare API key. Once you have done this, restart Let's Encrypt and it will validate and generate the certs that you need. Hope that helps.
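    The edited cloudflare.ini ends up looking like this (the email and key below are placeholders; use your own Cloudflare login email and the Global API Key from the Cloudflare dashboard):

```
# /mnt/user/appdata/letsencrypt/dns-conf/cloudflare.ini
dns_cloudflare_email = you@example.com
dns_cloudflare_api_key = 0123456789abcdef0123456789abcdef01234
```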
  14. SpaceInvaderOne

    [solved] No DNS or DHCP lease?

    Is there any reason why you need to have the same IP assigned to a container, or to connect through a hostname, each time it starts? For most purposes you would use the server's IP address and the port which the container is using to access the container. For example, for me NZBGet is on port 6789, so to access it I type the IP of my Unraid server followed by the port. So if I wanted to configure Sonarr to talk to NZBGet, I would tell it to use that IP and port, as that mapping will not change. However, there may be use cases whereby you might want to have containers talk by hostname; for example, with a reverse proxy it is easier to use hostnames. To do this you have to use a custom Docker network. I showed how in the video I made on the Let's Encrypt reverse proxy here, which you may want to check out. https://youtu.be/I0lhZc25Sro?t=688
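    The custom network boils down to a couple of commands. The network and container names here (proxynet, nextcloud, letsencrypt) are examples:

```shell
# Create a user-defined bridge; containers on it can resolve each other
# by container name via Docker's built-in DNS.
docker network create proxynet

docker run -d --name=nextcloud   --network=proxynet linuxserver/nextcloud
docker run -d --name=letsencrypt --network=proxynet linuxserver/letsencrypt

# Inside the letsencrypt container, the hostname "nextcloud" now resolves,
# so an nginx site config can use e.g.:  proxy_pass https://nextcloud:443;
```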
  15. If you are dropping into the UEFI shell then it is most likely a problem with the install media: the VM is trying to boot but not finding any UEFI-compatible media to boot from. This could be one of two things. 1. (Most likely) the Win 10 media isn't UEFI compatible, so it doesn't boot. I have had old Windows 10 images, downloaded a while back, that aren't UEFI compatible. So get a new image; try downloading Windows 10 directly from the Microsoft website here https://www.microsoft.com/en-us/software-download/windows10ISO 2. The VM template is trying to boot from the vdisk first, so the boot order means the Windows 10 install media isn't being booted from; it's trying to boot from the vdisk, can't, and so drops into the UEFI shell. In that case, delete the Windows template that you are using now and start fresh to avoid this.
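    If you would rather fix the boot order by hand than rebuild the template, it lives in the VM's XML. This is a generic libvirt sketch with example paths; the point is that the install ISO gets boot order 1 and the vdisk order 2:

```
<disk type='file' device='cdrom'>
  <source file='/mnt/user/isos/Win10.iso'/>
  <target dev='hda' bus='sata'/>
  <boot order='1'/>
</disk>
<disk type='file' device='disk'>
  <source file='/mnt/user/domains/Windows/vdisk1.img'/>
  <target dev='hdc' bus='virtio'/>
  <boot order='2'/>
</disk>
```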