jimbobulator

Everything posted by jimbobulator

  1. I see some discussion of a similar issue on the Sonarr forums, and it seems that issue was resolved in a recent update. That made me notice I'm running version 2.0.0.3212, while the latest in the AUR is 2.0.0.3357-1 (still a couple months old). unRAID says updates are "Not Available". Is this docker image just way out of date, or is something messed up on my end that's hiding the update?
  2. I've been having an issue with Sonarr lately - it's not picking up anything from the KAT RSS feed. Worked fine for a few months, and nothing has changed to my knowledge. Everything seems to be working, including the manual searches. Nothing obviously wrong in the logs that I can see. Anyone else having similar issues?
  3. Quote (my reply below): "You mean the update link from the Sonarr webUI? It kills my config too, at least it did when I tried it a couple months ago... Not sure why it does this, but basically don't try to use Sonarr's auto-update. Wait for binhex to update the docker image and upgrade yours via the unRAID webUI."
     Quote (their response): "No, I did it from the docker page where it says an update is ready."
     Weird. Never mind me then...
  4. You mean the update link from the Sonarr webUI? It kills my config too, at least it did when I tried it a couple months ago... Not sure why it does this, but basically don't try to use Sonarr's auto-update. Wait for binhex to update the docker image and upgrade yours via the unRAID webUI.
  5. I have the same problem. Before updating to 6.1.2 (from 6.0.1), I ran out of space in docker.img, so I increased the size from 10 to 15GB. After a couple weeks, without adding any new dockers or really doing anything, I saw that I was well over 10GB according to the unRAID settings page. I updated to 6.1.2, and I immediately got a notification saying docker.img was 80% full. The next day I got 5 messages showing 81%, 82%, 83%, 84% and 85%, all within a few hours. No changes for the last few days.
     On top of this, different places report different utilization. Docker settings shows 9.98/14, or 71% (I think):
        Label: none  uuid: c9b4118d-8c62-40ee-8291-827e79ededcb
        Total devices 1 FS bytes used 9.98GiB
        devid 1 size 15.00GiB used 14.00GiB path /dev/loop0
        btrfs-progs v4.1.2
     The notification I got said 85%:
        Event: Docker high image disk utilization
        Subject: Warning [bARAD-DUR] - Docker image disk utilization of 85%
        Description: Docker utilization of image file /mnt/appdisk/docker.img
        Importance: warning
     From the command line, I see that the sum of all the docker images I have is around 4.7GB, which would be 31%. I've monitored the sizes returned by "docker images" over a few days and they aren't increasing at all.
        root@barad-dur:~# docker images
        REPOSITORY                    TAG     IMAGE ID      CREATED        VIRTUAL SIZE
        binhex/arch-couchpotato       latest  bda16517d8d1  3 weeks ago    614.2 MB
        aptalca/docker-plexrequests   latest  7eca88a70c04  5 weeks ago    643.8 MB
        google/cadvisor               latest  175221acbf89  11 weeks ago   19.92 MB
        binhex/arch-sonarr            latest  b2a57e24dddf  3 months ago   1.013 GB
        xamindar/syncthing            latest  70b7d6227388  3 months ago   371.2 MB
        binhex/arch-delugevpn         latest  5be1c02a894c  3 months ago   930.9 MB
        needo/plexwatch               latest  4052a2f57e29  4 months ago   374 MB
        needo/plex                    latest  8906416ebf13  4 months ago   603.9 MB
        hurricane/ubooquity           latest  a598b1e14e5d  10 months ago  528.1 MB
        yujiod/minecraft-mineos       latest  ff8c61f22de6  11 months ago  604.5 MB
     Finally, cAdvisor shows different sizes, but the same 71% as calculated above from the docker settings UI: 11.50 GB / 16.11 GB (71%).
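     In case it helps anyone else comparing numbers, these are roughly the commands I've been using to cross-check where the space is going (assuming docker.img is mounted at /var/lib/docker like on my box; adjust the path if yours differs):
        # Sizes of the images themselves
        docker images
        # Per-container writable layer size, which docker.img also has to hold
        docker ps -a -s
        # What btrfs inside docker.img thinks is allocated vs. actually used
        btrfs filesystem show /var/lib/docker
        btrfs filesystem df /var/lib/docker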
  6. I have this problem with the same Microsoft keyboard, so I have opted to try passing the entire USB controller through, which I'm fine with. The VM boots up, but I still lose the keyboard a few seconds after it (the keyboard) powers up. I'd like to try the suggested fix below from Jude, but in 6.1 things are a bit different and the go file entry apparently isn't needed, so I'm not sure exactly how to do this. Here's the relevant section of my XML at the moment:
        <qemu:commandline>
          <qemu:arg value='-device'/>
          <qemu:arg value='ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
          <qemu:arg value='-device'/>
          <qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
          <qemu:arg value='-device'/>
          <qemu:arg value='vfio-pci,host=01:00.1,bus=root.1,addr=00.1'/>
          <qemu:arg value='-device'/>
          <qemu:arg value='vfio-pci,host=00:1d.0,bus=root.1,addr=00.2'/>
        </qemu:commandline>
     Any thoughts? 00:1d.0 is the USB controller in question.
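     For what it's worth, here's the check I've been running to see whether 00:1d.0 sits in its own IOMMU group (if it shares a group, the whole group has to follow it into the VM) - just standard lspci/sysfs poking, nothing unRAID-specific:
        # List the USB controllers and their PCI addresses
        lspci -nn | grep -i usb
        # Show every device by IOMMU group
        for g in /sys/kernel/iommu_groups/*; do
          echo "IOMMU group ${g##*/}:"
          for d in "$g"/devices/*; do
            echo "  $(lspci -nns "${d##*/}")"
          done
        done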
  7. Just found that thread - will try that out when I get a chance. Thanks! Apologies for hijacking your thread claira...
  8. Sadly I'm using a uATX board, and my sole PCIe slot has a video card for my htpc VM. I should just buy a different keyboard that actually works with device pass-through, but I'm stubborn. Honestly I just got frustrated and haven't had time to look at it since. Yes, I realize that each board will be different but I know there's at least a few people on here with the same MB.
  9. I started this process, but the number of permutations between the 3 or 4 settings in my BIOS (each with 2-4 options) is ridiculous and I didn't have 4 hours for trial and error. I found one combo that at least got unRAID to see 2 independent controllers, but I'd really prefer to see all 3. Some general understanding of what these settings actually mean and how they affect the hardware would probably simplify the process, but I couldn't find that info online in the time I had available. Any help is appreciated. If I ever finish mapping all the combos out I'll post them for anyone with a similar motherboard.
  10. I use the screengrab feature in OneNote. Windows+S, then drag a box around the part of the screen you want and it goes to the clipboard (or to a OneNote note if you set it up that way). Convenient if you already have OneNote and saves a step of cropping in paint.net.
  11. You will use one docker container per application; this is the way docker is intended to work. It compartmentalizes, or containerizes, your applications so they are isolated from each other, which makes them easier to update and manage. It has nothing to do with threads, which you don't really need to worry about. As far as how docker containers take advantage of your processor, unRAID lets you decide which CPU cores, and how many, to make available to each application. Hope that helps. For performance questions regarding PS and your images, you might have better luck asking for feedback on a photography forum. You won't find as many computer experts there, but surely someone has performance feedback from working with similar images on comparable hardware, for a reference point.
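     If you're curious what that core assignment looks like underneath, it ends up as a plain docker flag. A minimal example (container name and image are just placeholders for illustration; I believe the unRAID template wires this up for you):
        # Restrict a container to cores 2 and 3, leaving 0-1 for everything else
        docker run -d --name=pin-test --cpuset-cpus="2,3" busybox sleep 3600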
  12. Potentially obvious point but tripped me up the first time: when you browse to and select the various drivers as you are installing them, be sure to dig down to the amd64 folder for your version of windows. It doesn't dig through the folders recursively to find the driver.
  13. My setup is overpowered for sure. That said, two things to keep in mind: in order to pass through a GPU you need VT-d (or the AMD equivalent) in your processor, and you need a dedicated GPU, because it's currently not possible (and may never be) to pass through integrated graphics to a VM. For Intel processors, VT-d support starts somewhere in the i5 range and somewhere in the Xeon E3 range; I have no clue for AMD.
  14. You will need a machine capable of passing hardware (GPU/audio) through to your virtual machine. This means your motherboard and processor must both support IOMMU, called VT-d in the Intel world. You can find processors that support this via Intel's website and their filters. For the motherboard you'll need to consult manuals and/or search for examples proving it works. You'll want to search the forums and wiki for info on this; there's tons out there and no point rehashing it all here. Also read up on GPU passthrough using KVM, and pick a suitable, known-working GPU based on that. I'm doing basically what you propose. I have the server and dockers running most of my applications, although I haven't finished setting up the HTPC VM yet. The GPU passthrough seems to work but I haven't completely tested everything. The VM is more of a project than a requirement for me, so it's not really a rush. Here's my build for a reference point:
     - Case: Fractal Design Node 304
     - MB: Gigabyte H97N-WIFI (6 SATA ports)
     - PROC: Intel i5-4590
     - RAM: 16GB Patriot something or other
     - 3x 3TB Seagate drives (they were cheap and my storage needs are modest compared to most here)
     - 250GB Crucial SSD for docker appdata, VM vdisks, etc.
     - AMD 6540 video card passed through to the VM.
     Note that some Microsoft wireless keyboards don't play nicely with USB device passthrough on unRAID at the moment and require passing a whole controller through. This is the reason I haven't completed my testing: I'm having difficulty finding the right combination of BIOS settings to get the USB controllers to enumerate individually so they can be passed through.
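     Once you have hardware in hand, a quick sanity check from the unRAID console that the virtualization pieces are actually enabled (generic Linux commands, nothing unRAID-specific):
        # CPU virtualization extensions: vmx = Intel VT-x, svm = AMD-V
        grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u
        # Whether the kernel actually brought the IOMMU up (needs BIOS support too)
        dmesg | grep -iE 'dmar|iommu'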
  15. I'll be sleeping sadly. I'll have to catch the syndicated version tomorrow. - Canadian with #oldworldproblems
  16. It depends on how you set up the paths for the container. The template maps a directory inside the container to a directory on your server. I don't remember what the default paths are because I changed mine... For example, if you map /media (in the container) to /mnt/user/Movies/ (on your server), then in Plex you'd set your share as /media/Movies, I think... getting the paths mapped correctly took me a few tries the first time. Edit: Maybe in Plex it would be just /media/. Hmmmm. Haha.
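     If it helps, here's the mapping idea stripped down to a one-liner you can test with (paths are just examples from my setup; the real Plex template adds ports, appdata, and so on):
        # Host path on the left, container path on the right;
        # listing /media inside the container should show your Movies folder
        docker run --rm -v /mnt/user/Movies:/media busybox ls /media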
  17. There's a rule of thumb that you need about 2000 PassMarks or so of performance to transcode a 1080p stream in real time. Obviously there are a lot of details and assumptions baked in there, like the source and destination bitrates. In my experience that's a bit conservative and seems to work out okay. Check out candidate processors on cpubenchmark.net for average PassMark scores. I have an i5-4590 (scores around 7000 I think) and it can transcode at least 2 streams without problems, but I haven't really tried stressing it too much. My previous server scored 1500 and did fine with a single transcode. An i5 is probably a reasonable choice. Regarding the number of SATA ports, I have a Gigabyte board with 10 or 12 ports (don't remember) that cost me 125 or so.
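     The back-of-the-envelope math, if you want it (2000 is just the rule-of-thumb figure above, not a hard number):
        # Rough estimate of simultaneous 1080p transcodes from a PassMark score
        PASSMARK=7000      # roughly an i5-4590
        PER_STREAM=2000    # rule-of-thumb cost per 1080p transcode
        echo $(( PASSMARK / PER_STREAM ))   # ~3 streams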
  18. http://lime-technology.com/forum/index.php?topic=31735.0 Apparently the powerdown sequence in v6 is much improved... but I still like the plugin because it saves a log file. I don't claim to understand all the details about the differences but if someone can succinctly explain them I'd appreciate it too.
  19. There's a spot on the Docker tab called "Docker Repositories", with a big text box called "Template Repositories". Put the link there and click save. The alternative is to install the Community Applications plugin from this thread, which removes the need to add template repositories manually (pretty awesome). Then you search for the docker you want, click create and... boom. Docker! These features are pretty new, so some of the documentation floating around won't be perfect as some details might've changed a bit... but give it a few minutes to get your head around it and I think you'll be impressed.
  20. Quote: "Still working for me (Chrome, Firefox, IE). Do you have any adblockers or other filters installed for your browser(s)?"
     Yes, in Chrome I have uBlock, but I get the same results with it disabled. This might be the first time I've ever launched IE on this computer, so it's stock - no filters or blockers there.
  21. I have a restart button now, but I lost the WebUI link for each container's context menu. Cleared cache, tried on IE and Chrome v43 with the same results. edit: I don't have a logs link in the context menu anymore either, but only on the "Docker" tab. From "Dashboard" tab I get a logs link. No WebUI link on either one.
  22. I've had the DNS issue intermittently over the last few versions. RC6 was actually fine over several reboots, so I didn't bother with the --dns options and such. All is still good with RC6a, all dockers can resolve DNS after reboot.
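     For anyone who does still hit it, the usual workaround is to hand the container a resolver explicitly - something along these lines (8.8.8.8 is just an example DNS server):
        # Pin the container's DNS instead of relying on what the host had at start-up
        docker run --rm --dns 8.8.8.8 busybox nslookup google.com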
  23. Fair enough, I missed this when I read the thread backwards (facepalm). Based on my testing, it seems the WebUI does not get the same prioritization, and it's not clear whether jonp's term "unRAID OS" covers the WebUI. If a docker is going crazy and using 100% of all cores and I can't access the WebUI, I can't stop the docker. Well, I can, but not without going to the command line, which it seems LT is trying to avoid users having to do. Not much more than an annoyance for me, but it's an opportunity for improvement. To clarify, my experience is that high CPU load from a docker container makes the WebUI extremely slow - I haven't seen it completely crash, but it gets slow enough to be nearly unusable. I admit I have a low tolerance for this sort of UI behavior...
  24. Can you clarify what you mean by this? I assume we need to explicitly delete the container and recreate it from the image and template? Should we delete the image and re-download that as well?
     Quote: "You shouldn't have to delete the entire image, just the container. This can be achieved by stopping the container, going to its edit page, and just clicking 'save' - any user data INSIDE the container will be lost, of course, but I think all unRAID-specific containers are set up to map 'appdata' outside the container."
     Thanks. That was my understanding, but I just wanted it to be clear. Possibly something to add to the first post?
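     For reference, the command-line equivalent of that edit-and-save dance is just recreating the container from the image you already have - roughly like this (container name and paths are examples, not necessarily your exact template values):
        # Recreate a container without re-downloading the image;
        # anything not mapped out to the host is lost
        docker stop sonarr
        docker rm sonarr
        docker run -d --name=sonarr -v /mnt/user/appdata/sonarr:/config binhex/arch-sonarr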
  25. I have been planning to play with CPU pinning for my containers, because I'm running into problems where my CPU is pegged by a docker and I lose the ability to do anything else with the server. Clearly, pinning CPUs intelligently will sort this out. That said, in the name of user friendliness, I think setting this parameter needs to be easier in the WebUI, and ideally there should be a way for unRAID to maintain priority for NAS/WebUI functionality, whether through default CPU pinning or process prioritization. In my opinion, add-on applications like dockers should not be able to take over to the point where you can't interact with the server anymore.
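     Until something like that exists natively, the knobs docker itself already has are relative CPU weight and core pinning - e.g. (values and names are only illustrative):
        # Give a heavy container a lower CPU weight under contention
        # (1024 is docker's default; lower = less CPU when cores are busy)
        docker run -d --name=cruncher --cpu-shares=256 busybox sleep 3600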