Julius's Achievements


  1. Thanks, the Libvirt Hotplug USB App worked for me.
  2. I'd like to access a USB device from within a VM. The problem, however, is that its controller sits in the same IOMMU group as the Unraid USB stick:

     Group 4: 00:14.0 [8086:a36d] USB controller: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller (rev 10)

     USB devices attached to this controller:
     Bus 001 Device 004: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
     Bus 001 Device 003: ID 0781:5567 SanDisk Corp. Cruzer Blade
     Bus 001 Device 002: ID 289b:0505 Dracal/Raphnet technologies
     Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
     Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

     USB Device 002 is the one that needs to be passed through. Why is this so hard to share with a VM in 2020?
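A sketch of the hot-attach route that can work even when the whole controller can't be passed through (because it shares a group with the boot stick): hand just that one USB device to the guest via libvirt. The vendor/product IDs come from the lsusb listing above; the VM name "MyVM" is a placeholder.

```shell
# Write a libvirt <hostdev> fragment for the Dracal/Raphnet device
# (ID 289b:0505, Bus 001 Device 002 in the listing above).
cat > /tmp/usb-289b-0505.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x289b'/>
    <product id='0x0505'/>
  </source>
</hostdev>
EOF

# Hot-attach it to a running VM ("MyVM" is a placeholder name):
#   virsh attach-device MyVM /tmp/usb-289b-0505.xml --live
# Add --config as well to make it persist across VM restarts.
echo "fragment written"
```

This is essentially what the Libvirt Hotplug USB plugin mentioned above automates.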
  3. The Fix Common Problems script tells me my CA Mover Tuning plugin is deprecated and links to an updated fork, but there's no URL or anything there to install it from. Oh, apparently 'Apps' are now the same as 'Plugins'? Why the different tabs in the menu, then?
  4. There is no "Custom: br0" under my docker Network type options. I already gave it its own IP (as I clearly wrote), yet it still fails, claiming an IP it does not actually have. I've switched off docker support on my unraid entirely. Back to using VMs for everything: much easier to maintain and to secure (csf/lfd firewall), no strange translations, soft-linking or proxying, and I found a very good config for pihole using an nginx server.conf with php-fpm here.
  5. I already tried both options, and more. It doesn't work. Unraid runs on two interfaces (eth0 and eth1, bonded as bond0). I've set the docker container to use a free IP, but even when I set it to use the same IP with different ports (81 and 445, for example), there's no pihole web-UI running. The docker container keeps failing to start, and when it does start it reports an IP which I did not set. Attached are the network config and docker config. (Good grief, what do people see in these docker containers? It's a complete flaky network mess, full of translations, redirections and proxies, adding latency and complexity, plus extra webservers running just for one app. I keep saying it: VMs are more efficient, easier to maintain and easier to make accessible. But much to my surprise, pi-hole doesn't even properly support being installed on Debian 10 with the shipped php-fpm and nginx, otherwise I would already have done that in the VMs I run on this unraid server.)
  6. For me there's never a br0, and I have not changed anything from the defaults. Also, /var/lib/docker/network/files/local-kv.db does not exist on an up-to-date unraid server. The networking stack is still very flaky in unraid: if I change the docker network settings, it can hang the server entirely and make it inaccessible. And all I have here are 2 NICs connected in a bond, so that transfers to/from the server are faster. Other than that, nothing deviates from the defaults.
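For what it's worth, a sketch of what "Custom: br0" amounts to under the hood when it does appear: a macvlan docker network parented on the bridge, which is what gives a container its own address on the LAN. This is not a claim that it fixes the missing-br0 symptom; the subnet, gateway and addresses below are assumptions to adapt to your LAN.

```shell
# Assumed LAN of 192.168.1.0/24 with the router at .1; br0 is the
# Unraid bridge interface. All addresses are placeholders.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=br0 \
  br0net

# A container on that network then gets its own LAN IP:
docker run -d --name=pihole --network=br0net --ip=192.168.1.53 pihole/pihole
```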
  7. root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='pihole-with-doh' --net='bridge' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e 'DNS1'='' -e 'DNS2'='' -e 'TZ'='Europe/Amsterdam' -e 'WEBPASSWORD'='password' -e 'INTERFACE'='br0' -e 'ServerIP'='' -e 'ServerIPv6'='' -e 'IPv6'='False' -e 'DNSMASQ_LISTENING'='all' -p '53:53/tcp' -p '53:53/udp' -p '67:67/udp' -p '80:80/tcp' -p '443:443/tcp' -v '/mnt/user/appdata/pihole-doh/pihole/':'/etc/pihole/':'rw' -v '/mnt/user/appdata/pihole-doh/dnsmasq.d/':'/etc/dnsmasq.d/':'rw' --cap-add=NET_ADMIN --restart=unless-stopped 'testdasi/pihole-with-doh'

     39f5fd7455f1fa5dfed989b1cefa10fdebea845722d37b2dc8861afc9b8c0203

     /usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint pihole-with-doh (ec98edf36e66ae40c9ead8078e625cbcc65563379fd10c2073956e49ac008095): Error starting userland proxy: listen tcp bind: address already in use.

     The command failed. Whatever I do, I can't seem to get it running.
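A hedged reading of that failure: with --net='bridge', the -p '80:80/tcp' and -p '443:443/tcp' mappings ask the docker daemon to bind host ports that something else already holds (quite possibly the Unraid webUI itself), hence "address already in use". Remapping only the host side of the clashing mappings would sidestep it, e.g.:

```
-p '8080:80/tcp' -p '8443:443/tcp'
```

(8080 and 8443 are arbitrary free ports chosen for illustration; the pihole web-UI would then answer on host port 8080, while port 53 for DNS stays as-is.)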
  8. My personal take has always been in favor of spinning disks down, depending on how long they generally stay spun down. Basically what jonathan is saying as well: if they're offline for more than, say, 12 hours, I'd call that a win for disk longevity. I think I also read that in a Google datacenter paper. I now have unraid set to a spin-down delay of 45 minutes. I started with 15 minutes, but noticed that was moot, since some disks would be doing things intermittently right after being spun down, like receiving the finishing bit of a backup from a remote server, or loading or unloading files from a device on the LAN. None of these tasks lasts longer than 45 minutes, so if a disk has been idle for that long, it can be spun down and won't be accessed again soon after. We don't stream from the unraid server; if you do, setting the delay to the maximum duration of any video you access from it makes sense, I guess. It also depends on how many disks you have, how you have divided their use under Shares (I have some of the 10 disks in the array excluded for parts, so they can stay spun down longer), and how many people access your server. I also have a really large cache (1TB), which helps: reads and re-reads come from the SSD and don't require new access to the array.
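The reasoning about the delay can be made concrete with a toy calculation. Given a made-up list of access times for one disk, count how many spin-down/spin-up cycles each delay setting would cause; the shorter delay keeps spinning down into gaps that are about to end:

```shell
# Hypothetical access times for one disk, in minutes since midnight.
accesses="0 10 40 100 160 600"

for delay in 15 45; do
  cycles=$(echo "$accesses" | tr ' ' '\n' | awk -v d="$delay" '
    NR > 1 && $1 - prev > d { n++ }   # gap long enough to spin down, then a wake-up
    { prev = $1 }
    END { print n + 0 }')
  echo "delay=${delay}min -> ${cycles} spin-up cycles"
done
```

With these sample gaps (10, 30, 60, 60 and 440 minutes) the 15-minute delay causes 4 spin-up cycles against 3 for the 45-minute delay: one wear cycle saved at the cost of 30 extra minutes of spinning.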
  9. Oh, I wasn't even checking. I thought it used apache, since I saw that mentioned somewhere. I'll edit my post, because that part was irrelevant anyway. It's not apache vs. nginx that makes the VM better (for me).
  10. I've tested these two options: 1) this Piwigo docker, accessing a separate mariadb instance: rather complex, and slow at loading large amounts of images. 2) Piwigo, mariadb, nginx, php-fpm and CSF, all on one minimal Debian VM. Both instances of Piwigo access the exact same folders from an unraid share with terabytes of image files. The second option performs noticeably faster, even without doing proper IO tests; the difference is so obvious that I'm not going to bother measuring it with tools. It could be because I run unraid on a decent Xeon with 32GB RAM, but still: I don't see any advantage in the docker instance for Piwigo, and I just wanted to share that. Frankly, setting up Piwigo in that VM was so much easier; other than maybe using a little fewer resources, I don't understand what all the fuss about having it as a docker instance is about. The SSL/TLS cert for nginx sits on an Unraid share, mounted in the VM via an Unraid mount tag: the same LetsEncrypt wildcard cert I use for the unraid UI. So no weird proxying or network complexities. Plus a csf/lfd firewall in front of the Piwigo VM, which lets me serve the whole thing through my internet router to the world.
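For reference, the VM setup described above boils down to a fairly plain nginx server block. This is a sketch with assumed values: the hostname, the cert location on the Unraid mount, the web root and Debian 10's php7.3-fpm socket path are all placeholders to adapt.

```
server {
    listen 443 ssl;
    server_name piwigo.example.com;              # placeholder hostname

    # LetsEncrypt wildcard cert read from the Unraid share mounted in the VM
    ssl_certificate     /mnt/unraid-certs/fullchain.pem;
    ssl_certificate_key /mnt/unraid-certs/privkey.pem;

    root  /var/www/piwigo;
    index index.php;

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.3-fpm.sock;  # Debian 10 default socket
    }
}
```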
  11. No end in sight, and I have no idea why. In the bottom left corner it says: "Array Stopping•Retry unmounting user share(s)..." Nothing is connected to it except this one browser tab. This wastes a lot of time whenever I try to change the config, add a disk, etc. The only way to get there is by powering down and then immediately clicking to stop the array as soon as it starts.
  12. I don't think you've read that correctly: the AER driver does the reporting of the corrected errors, not the correcting itself. The AER driver receives the corrected-error notification but fails to clear it. Besides, the error is not coming from the card but from the PCIe hardware on the mainboard. Of course I can try a different card, but I doubt it will make a difference; the source is in unRAID's linux kernel, not the card's hardware. (It's already one of the best cards, with vast config options and up-to-date firmware.)
  13. Found out it is a known kernel error on many linux distros, related to Advanced Error Reporting: http://billauer.co.il/blog/2015/10/linux-pcie-aer/ and apparently it can be switched off per device. See also https://gist.github.com/Brainiarc7/3179144393747f35e5155fdbfd675554 The problem is that I can't test for days yet, because parity is being rebuilt here.
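For anyone landing here later: the coarse, whole-system alternative to the per-device masking in those links is the pci=noaer kernel parameter, which disables AER reporting entirely. On Unraid, kernel parameters go on the append line in /boot/syslinux/syslinux.cfg; roughly like this (your existing label and append line may differ):

```
label Unraid OS
  menu default
  kernel /bzimage
  append pci=noaer initrd=/bzroot
```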
  14. OK, that's reassuring, thanks. I asked because I noticed /etc/ssh got newly generated ssh_host* keys while I had not rebooted the server. Perhaps stopping and starting the array re-generates keys if some don't exist?
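That guess matches how sshd host keys are commonly (re)generated: ssh-keygen -A creates only the key types that are missing and leaves existing ones untouched, which would explain new files appearing without a reboot. A self-contained demonstration in a throwaway directory (the -f prefix keeps it away from the real /etc/ssh):

```shell
# Generate a full host-key set in a scratch prefix, then show that a
# second run only recreates keys that were deleted in between.
dir=$(mktemp -d)
mkdir -p "$dir/etc/ssh"

ssh-keygen -A -f "$dir" >/dev/null
before=$(cat "$dir/etc/ssh/ssh_host_ed25519_key.pub")

rm "$dir/etc/ssh/ssh_host_rsa_key" "$dir/etc/ssh/ssh_host_rsa_key.pub"
ssh-keygen -A -f "$dir" >/dev/null          # regenerates only the rsa pair

after=$(cat "$dir/etc/ssh/ssh_host_ed25519_key.pub")
[ "$before" = "$after" ] && echo "existing ed25519 key left untouched"
[ -f "$dir/etc/ssh/ssh_host_rsa_key" ] && echo "missing rsa key regenerated"
```

Whether Unraid runs exactly this at array start is an assumption, but the observed behavior (only missing keys reappearing) is consistent with it.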
  15. Hmmm... apparently it doesn't make one bit of difference how I set the boot options for the card, or whether the boot flash is even present. As soon as I use this LSI 9207-8i card in this PCIe slot, I get this:

      Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 134131744 bytes) in /usr/local/emhttp/plugins/dynamix/include/Syslog.php on line 20

      Note that *everything* else is error-free on this unRAID server and its hardware. Attached is the latest run that filled up the syslog: syslog.zip