
lovingHDTV

Everything posted by lovingHDTV

  1. Is anyone running this docker in anything other than bridge mode?
  2. I'm trying to move all my dockers to my IoT VLAN. I've successfully moved Plex, Deluge, and NZBGet, and can access them via their web interfaces fine. I also moved Sonarr and can access its web interface fine; however, I cannot get Sonarr to connect to either of the downloaders. I did update the IPs to the new ones, and they are all on the same subnet. I opened a console into the Sonarr docker and can ping the IPs of the two download clients. Any ideas what I need to do to get Sonarr to connect to the download clients? Essentially I've moved everything from bridge to br0.10. One additional note: it connects to Jackett just fine after I moved Jackett as well. thanks, david
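Worth noting for this kind of problem: ping only proves the IP is reachable over ICMP, while Sonarr needs the downloaders' TCP ports. A minimal bash check, runnable from a console inside the Sonarr container (the IP and port below are placeholder examples, not values from the post):

```shell
# Uses bash's built-in /dev/tcp, so no extra tools are needed in the container.
port_open() {
  # succeeds (exit 0) only if a TCP connection to $1:$2 can be opened
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# example address/port -- substitute the download client's real IP:port
if port_open 192.168.10.50 6789; then
  echo "downloader port reachable"
else
  echo "ICMP may work, but the TCP port is blocked or nothing is listening"
fi
```

If ping works but this fails, the usual suspects are a firewall rule between VLANs or the service listening only on its old interface.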
  3. OK, how do I change that? I don't see that option in the VM settings any longer. I do see the btrfs volume as 20GB. Never mind, figured it out.
  4. OK, I changed the disk size from 1GB to 20GB and now it created the file. thanks
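For reference, the same grow can be done from the command line instead of the GUI; the vdisk path below is an example, not the actual path from the post, and the filesystem inside the guest still has to be expanded separately:

```shell
# Grow a raw vdisk image in place (example path -- adjust to your VM's vdisk).
qemu-img resize -f raw /mnt/user/domains/testvm/vdisk1.img 20G

# Equivalent for a raw (sparse) image if qemu-img is not handy:
truncate -s 20G /mnt/user/domains/testvm/vdisk1.img
```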
  5. I'm trying to set up a VM for the first time and cannot find the libvirt file. Do I need to download that somewhere? I didn't see that step in the wiki. thanks, david
  6. I just built a similar system, except I used an E5-2680v3. So far it has been good. The IOMMU grouping is pretty good out of the box and I can use my GPU in the Plex container without issues. I have had one instance where Samba went unresponsive, but I'm hoping that isn't a common occurrence. I do wish the board had IPMI, but it isn't a deal breaker for me.
  7. I was trying to copy a small file over to my cache drive. It took several minutes to copy instead of a few seconds. I took a look at the processes and saw find, mdrecoveryd, and unraidd taking 200% of the CPU, with 2% I/O wait. Any ideas what kicks off find, mdrecoveryd, and unraidd? Nothing else was running. thanks, david
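One way to find out what launched those processes is to walk up the parent chain while they are running (the process names are from the post; pgrep will match nothing once they have finished, and the parent could turn out to be something like a cron job or a plugin script — that part is a guess):

```shell
# For each suspect process, print its PID, parent PID, and the parent's
# command name, to see what actually kicked it off.
for name in find mdrecoveryd unraidd; do
  for pid in $(pgrep -x "$name"); do
    ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')
    echo "$name ($pid) started by: $(ps -o comm= -p "$ppid") ($ppid)"
  done
done
```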
  8. I've been using this in my box, but with my recent hardware upgrade I no longer need it. It has been flashed to IT mode and ran without issues with UnRaid. $40 + shipping, CONUS. thanks, david
  9. I'm looking for the block-off plate for the bottom power-supply bay of the original Cooler Master Stacker case. I know that LimeTechnology sold a lot of these as preconfigured systems, and I'm hoping someone may have an extra blocker plate lying around. It lets you put 2x80mm fans in down there. Mine has been a gaping hole for years, ever since I went from dual power supplies to a single one. If so, please PM me. david
  10. Never mind, I found that my .htpasswd was located at /config/nginx/site-confs/.htpasswd. I moved it to the correct place and everything started working.
  11. OK, I narrowed it down to my password file. If I remove it from site-confs/default I can access everything internally and externally. If I put in: auth_basic "Restricted"; auth_basic_user_file /config/nginx/.htpasswd; I immediately get a 403 Forbidden message, with no chance to even enter the password. I tried Edge, as I hadn't used it, and there I did get the password prompt before getting the 403 message.
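A quick way to (re)create the password file at the path the auth_basic_user_file directive expects — the username and password here are placeholders, and an unreadable file produces the same 403 as a missing one:

```shell
# Path matches the auth_basic_user_file directive above.
HTPASSWD=/config/nginx/.htpasswd

# One user:hash line; openssl's -apr1 produces an htpasswd-compatible hash.
printf 'david:%s\n' "$(openssl passwd -apr1 'changeme')" > "$HTPASSWD"

# nginx returns 403 if it cannot read the file, so fix permissions too.
chmod 644 "$HTPASSWD"
```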
  12. I tested the hole and can't discern which direction the air is flowing. It isn't extreme in any condition. I'm not too worried, as it has been functioning for 12 years this way. It has been on my list to block off, but obviously not very high up on the list.
  13. I mistakenly clobbered my letsencrypt docker. Hint: don't install two dockers with the same name, even mistakenly. So I started over fresh and followed the same walkthrough as last time, but things didn't work this time. https://cyanlabs.net/tutorials/the-complete-unraid-reverse-proxy-duck-dns-dynamic-dns-and-letsencrypt-guide/ I filled in the docker just like the tutorial says, but using my own data, which is also on duckdns.org. It first went wrong after I started the docker: I couldn't even connect to get the "Welcome to our server" message. When I connect to port 81 I get "site cannot be reached, connection refused". I continued, thinking I might just need more configuration to get it working. After completing the setup and adding a /sonarr subdirectory I still get that message for port 81, but now on port 444 I get a password prompt; I enter it and then get 403 Forbidden, nginx 1.16.1. I was happy to see the username/password prompt, but the 403 is annoying. It happens for every subdirectory. Any ideas? thanks, david
  14. There are fans in front of the drives pulling air in from the outside and over them. As I have 3x120mm fans pulling in fresh air and only 2 exhausting air, I suspect that air exits the hole in the bottom. I'll test that out and see.
  15. I tried every hypervisor I could find. None of them shared the GPU in a manner that the unraid-nvidia plugin would recognize. Good question on the airflow; I did my best to draw it up. There are 3x120mm fans in the 4-in-3 adapters, one 80mm fan in the top, the power supply's fan, and the 120mm in the back of the case. The final fan is an 80mm in the side panel that brings in fresh air immediately below the CPU fans. The CPU fans move air upwards.
  16. Following: https://blog.linuxserver.io/2017/05/10/installing-nextcloud-on-unraid-with-letsencrypt-reverse-proxy/ When I create the nextcloud config file for letsencrypt, the bottom has a location section. Nothing in the wording says what to set that to; I'm assuming it should be our Nextcloud IP:port? Nextcloud was working until I put in the letsencrypt stuff. Now Nextcloud doesn't work and neither does my letsencrypt. thanks, david
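For what it's worth, a sketch of what that location section usually ends up looking like — the upstream IP:port is a placeholder for the Nextcloud container's address, not a value from the post:

```nginx
location /nextcloud {
    # placeholder upstream -- use your Nextcloud container's IP:port
    proxy_pass https://192.168.1.100:443/nextcloud;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```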
  17. Before giving up on virtualization, I tried Hyper-V Server 2016. It took a bit to get it set up so that I could remotely manage it from my desktop. I finally found an article that steps you through what needs to be done for authentication if you don't have AD running; I'd imagine most home users don't have AD set up. Here it is in case anyone else chooses to go down this route: https://blog.ropnop.com/remotely-managing-hyper-v-in-a-workgroup-environment/ With Hyper-V I found that they don't support exporting the GPU anymore. Not sure if there was a way, but as that was my sticking point, I bailed on Hyper-V as well. I also read they don't export the USB drive; I was hoping to maybe use Plop to work around that, but didn't explore it at all. I could go with ProxMox and run Plex in its own VM, but I really like the integrated manner of UnRaid. Can you create a virtual switch in UnRaid to have VMs talk to each other? This AM I started to pull my old system apart and replace it with the new hardware. I pulled the old power supply and MB out; all I left were the drives. I then had to reconfigure the standoffs, as I was going from m-ATX to e-ATX. Yep, that one little letter means a whole lot!! My case supports e-ATX, but there were two standoffs that didn't align. The upper left (by the memory and IO plate) was off by half the bolt thickness; I could push and prod to see half the thread, but couldn't get the screw in. I was able to leave the standoff in place to support the board, but there's no grounding there. The next was the center top; for whatever reason there isn't even a standoff anywhere near there. I don't know if the standard changed or if my case just came out early (it is my original Cooler Master, bought when I first built my UnRaid box back in 2002?). I proceeded to install the power supply/cables and graphics cards, and took the time to swap out some 4-in-3 cages for others I had that all match and have a CoolerMaster face plate.
I put in both my new GTX 1060 and my old card, in the hope that I could use the old card for UnRaid and pass the GTX 1060 to Plex. However, after booting I heard 8 beeps and nothing happened. So I looked up beep codes in the MB document, and it isn't there. I hooked my monitor up to the Nvidia card and the screen lit up: an AMI error, "Out of PCI-E resources". I've never heard of that. So I swapped the slots of the two cards and rebooted. 8 beeps again. For some reason it doesn't like both cards at the same time, so I ditched the older card. I had a 4T drive I wanted to put into the system to swap out a 2T drive, so I installed that drive as well. It is now preclearing at 53% done and 153MB/s, while at the same time doing a parity check at 150MB/s. I'm happy with that; my parity checks previously were 90MB/s. Here is a picture after I was all done. I have a hole in the bottom, as that is where the second power supply can go. I only use a single supply now that they are powerful enough. I wish I had a plate to block it off. I've ordered a couple more case fans to replace the old ones. They will be PWM controllable and have places on the MB to plug them in; today's fans are really old ones that plug into the old-style power plug. I'll be able to pull that power cable out when I get the new fans. Now the front, with matching 4-in-3 bezels. Yep, my 'old man' glasses, as I can't see the front panel connections anymore. And the best shot of all, the Dashboard.
  18. OK, I took a few hours today; created an account, downloaded, installed, and created a VM on ESXi. There was a nice walkthrough here. As my goal for a VM solution was stability and ease of use, and both have failed to deliver, I'll now just deploy the system bare metal, as I've had it running since UnRaid first got released so many years ago.
  19. I'm trying to get this to work with UnRaid running in a VM in ProxMox. It works fine when I boot UnRaid bare metal, but it doesn't work within a VM. I think I have the GPU shared properly, as lspci -v between the ProxMox VM and bare metal appear almost the same; the PCI number and IRQ are different. Maybe someone here can spot a difference.

      Bare metal UnRaid:

      02:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 3GB] (rev a1) (prog-if 00 [VGA controller])
              Subsystem: Gigabyte Technology Co., Ltd GP106 [GeForce GTX 1060 3GB]
              Flags: bus master, fast devsel, latency 0, IRQ 26, NUMA node 0
              Memory at ef000000 (32-bit, non-prefetchable) [size=16M]
              Memory at c0000000 (64-bit, prefetchable) [size=256M]
              Memory at d0000000 (64-bit, prefetchable) [size=32M]
              I/O ports at 5000 [size=128]
              [virtual] Expansion ROM at 000c0000 [disabled] [size=128K]
              Capabilities: [60] Power Management version 3
              Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
              Capabilities: [78] Express Legacy Endpoint, MSI 00
              Capabilities: [100] Virtual Channel
              Capabilities: [250] Latency Tolerance Reporting
              Capabilities: [128] Power Budgeting <?>
              Capabilities: [420] Advanced Error Reporting
              Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
              Capabilities: [900] Secondary PCI Express <?>
              Kernel driver in use: nvidia
              Kernel modules: nvidia_drm, nvidia

      02:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)
              Subsystem: Gigabyte Technology Co., Ltd GP106 High Definition Audio Controller
              Flags: bus master, fast devsel, latency 0, IRQ 10, NUMA node 0
              Memory at f0080000 (32-bit, non-prefetchable) [size=16K]
              Capabilities: [60] Power Management version 3
              Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
              Capabilities: [78] Express Endpoint, MSI 00
              Capabilities: [100] Advanced Error Reporting

      ProxMox UnRaid:

      01:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 3GB] (rev a1) (prog-if 00 [VGA controller])
              Subsystem: Gigabyte Technology Co., Ltd GP106 [GeForce GTX 1060 3GB]
              Flags: bus master, fast devsel, latency 0, IRQ 16
              Memory at c0000000 (32-bit, non-prefetchable) [size=16M]
              [virtual] Memory at 800000000 (64-bit, prefetchable) [size=256M]
              Memory at 810000000 (64-bit, prefetchable) [size=32M]
              I/O ports at d000 [size=128]
              [virtual] Expansion ROM at c1020000 [disabled] [size=128K]
              Capabilities: [60] Power Management version 3
              Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
              Capabilities: [78] Express Legacy Endpoint, MSI 00
              Capabilities: [100] Virtual Channel
              Capabilities: [250] Latency Tolerance Reporting
              Capabilities: [128] Power Budgeting <?>
              Capabilities: [420] Advanced Error Reporting
              Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
              Kernel driver in use: nvidia
              Kernel modules: nvidia_drm, nvidia

      01:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)
              Subsystem: Gigabyte Technology Co., Ltd GP106 High Definition Audio Controller
              Flags: bus master, fast devsel, latency 0, IRQ 10
              Memory at c1000000 (32-bit, non-prefetchable) [size=16K]
              Capabilities: [60] Power Management version 3
              Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
              Capabilities: [78] Express Endpoint, MSI 00
              Capabilities: [100] Advanced Error Reporting

      The only thing I can find different is this biggie:

      dmesg | grep GPU
      [ 271.949382] NVRM: GPU 0000:01:00.0: RmInitAdapter failed! (0x26:0xffff:1133)
      [ 271.949691] NVRM: GPU 0000:01:00.0: rm_init_adapter failed, device minor number 0

      From googling around, it sounds like it is one of two things: either the GPU is broken (which I know otherwise, because it works bare metal), or the NVidia drivers are loaded wrong. Odd here, because it is the same drivers, etc. Hoping someone may see something I missed. thanks, david
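A common cause of RmInitAdapter failures inside a VM is the NVIDIA driver refusing to initialize once it detects the hypervisor. A sketch of the Proxmox VM config options typically used to hide KVM for NVIDIA passthrough — the vmid, PCI address, and machine/BIOS choices here are assumptions, not taken from the post:

```
# /etc/pve/qemu-server/<vmid>.conf fragment -- example values, adjust to your VM
machine: q35
bios: ovmf
hostpci0: 01:00,pcie=1,x-vga=1   # passes both functions (GPU + HDMI audio)
cpu: host,hidden=1               # hides the KVM signature from the guest
```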
  20. Flashed the SAS 3008 to IT mode and got my mini-SAS HD cables. Everything is working on bare metal: SAS, SATA, GPU shared to the Plex docker. I lose GPU sharing when running under ProxMox; I can see the GPU in unRaid, but something still isn't quite there because it fails to load. I'll play a couple more days trying to get GPU sharing down, but after that I'll just deploy the bare-metal build. I may skip ProxMox even if I get GPU sharing to work, as I don't have a lot of confidence in ProxMox for GPU sharing right now. Maybe time to look at Hyper-V?
  21. Not sure if this will help, but I can currently see the GPU with lspci -v, yet for some reason I still get:

      Aug 20 09:53:53 Tower2 kernel: NVRM: GPU 0000:01:00.0: RmInitAdapter failed! (0x26:0xffff:1133)
      Aug 20 09:53:53 Tower2 kernel: NVRM: GPU 0000:01:00.0: rm_init_adapter failed, device minor number 0

      01:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 3GB] (rev a1) (prog-if 00 [VGA controller])
              Subsystem: Gigabyte Technology Co., Ltd GP106 [GeForce GTX 1060 3GB]
              Flags: bus master, fast devsel, latency 0, IRQ 16
              Memory at c0000000 (32-bit, non-prefetchable) [size=16M]
              [virtual] Memory at 800000000 (64-bit, prefetchable) [size=256M]
              Memory at 810000000 (64-bit, prefetchable) [size=32M]
              I/O ports at d000 [size=128]
              [virtual] Expansion ROM at c1020000 [disabled] [size=128K]
              Capabilities: [60] Power Management version 3
              Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
              Capabilities: [78] Express Legacy Endpoint, MSI 00
              Capabilities: [100] Virtual Channel
              Capabilities: [250] Latency Tolerance Reporting
              Capabilities: [128] Power Budgeting <?>
              Capabilities: [420] Advanced Error Reporting
              Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
              Kernel driver in use: nvidia
              Kernel modules: nvidia_drm, nvidia

      I've searched for this error, but never found an answer.
  22. Right now you can see that I have used the outer two on one side. I tried using the outer slots -- one DIMM per group of slots for each CPU -- and the system wouldn't boot. I only got it to boot by putting them both on the same side.
  23. Update: the machine works well. I'm just trying to get ProxMox to pass the GPU through to unRaid for Plex. I ran two preclears at the same time and sustained 150MB/s on both of them; I was happy with that. Here is a picture of it on my workbench. I didn't have a front panel lying around, so I grabbed a spare tower and used it for testing. I'm still not 100% on the memory slot positioning. I know that for dual-channel systems they recommend one DIMM per channel to improve memory bandwidth, but the Supermicro board manual doesn't provide any details. So do I put them both in blue, or one in blue and one in black?
  24. I'm currently trying to get the GPU to pass through. Has anyone done this successfully? I have gotten it working when I boot straight into UnRaid using Plex, but I cannot get it to work when using ProxMox. I followed this guide, but did not include downloading the ROM. Still looking for the right way to do this. thanks
  25. I figured out that I didn't check the UEFI boot mode when I made my unRaid USB drive, and that is probably why I couldn't get UEFI to boot and had to revert to SeaBIOS. I'll remake the USB stick and try again. UPDATE: Yes, checking the UEFI box in the UnRaid download tool does in fact let me boot in UEFI mode, and it boots properly off the USB drive this way. thanks
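As I understand it, that checkbox works by naming the flash drive's boot folder EFI instead of EFI-, so an already-written stick can be fixed by hand without remaking it (the /boot mount point is the usual unRAID flash location):

```shell
# On the unRAID flash drive, the UEFI boot files live in EFI-/ by default;
# renaming the folder to EFI enables UEFI booting (the creator tool's
# UEFI checkbox performs the same rename).
mv /boot/EFI- /boot/EFI
```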