Everything posted by JonathanM

  1. Possibly, but keep in mind there have been several reports that Ryzen support is much better in a later Linux kernel than the one currently in unraid. With that in mind, I wouldn't expect much of a positive result until unraid's next update that includes the new kernel. I'm sure limetech has internal builds that they are testing, but don't expect to hear about them. Realistically, I'd say wait for the next round of public unraid betas before even contemplating a Ryzen build unless you are a willing guinea pig.
  2. Stop your deluge container before you try to start delugevpn.
  3. Set unraid to not start the array automatically. That way you can confirm everything is ok before you start all the services. It's a very bad idea to run the UPS to empty and immediately start everything back up when power returns. Many times the power will go back out again for a little bit after the first time it comes up, and if the UPS batteries are already drained, there will be no way for the server to shut down cleanly again. Fully draining the batteries is also very hard on the UPS. There are only 2 likely scenarios I can think of: either the power goes out so rarely that waiting for your servers to come back online will hardly ever happen, or the power goes out so frequently that you really need to rethink the use of just a battery backup, possibly an inverter with a bank of batteries to ride out the outages. Either way, automatic restart after a power outage is a bad idea; you need to evaluate each individual situation before you get everything running again.
  4. To do that, you would need to tie the green PS_ON (power-on) leads from each ATX motherboard connector together so both power supplies power up at the same time.
  5. Docker in unraid uses two different network models: bridge requires explicitly forwarding each port through, while host exposes the whole container, so any ports the container opens are exposed on the host.
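The difference can be sketched with plain docker commands (the image and port here are illustrative assumptions, not specific to any container mentioned in this thread):

```shell
# Bridge mode: only the ports you forward with -p reach the container.
docker run -d --network bridge -p 8112:8112 binhex/arch-deluge

# Host mode: the container shares the host's network stack, so every
# port it opens is exposed directly; -p mappings are not honored.
docker run -d --network host binhex/arch-deluge
```

Bridge is the safer default for containers that only need a couple of known ports; host is convenient when a container opens many or unpredictable ports.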
  6. If you are connecting the drives to the SATA end then I'm pretty sure you need forward breakout cables. Reverse are used to plug the SATA end into the motherboard or HBA controller and connect the miniSAS end to a hotswap drive box or backplane.
  7. If it's set up the way you have mapped in your illustration, then something downloaded to /data/whatever in rutorrent can't be found by sonarr, because sonarr will look for it at /downloads/whatever. That is why the mappings have to match, so a full path that works inside one container works in the other. The file will obviously be there, because both mappings point to /mnt/user/downloads, but the full path inside the container won't match. Your illustration is a good diagram of why it won't work that way.
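To make the mismatch concrete, here is a minimal sketch (the file name is hypothetical) of how the same host file ends up with two different container-side paths when the mappings differ:

```shell
# One host file, mapped into two containers at different mount points:
host_file="/mnt/user/downloads/Some.Show.S01E01.mkv"
rutorrent_path="/data/Some.Show.S01E01.mkv"      # rutorrent maps /data
sonarr_path="/downloads/Some.Show.S01E01.mkv"    # sonarr maps /downloads

# rutorrent reports its container path; sonarr then tries to open that
# exact string inside its own filesystem, where /data does not exist.
echo "rutorrent reports: $rutorrent_path"
echo "sonarr would need:  $sonarr_path"
```

With identical mappings (both containers using /downloads), the two strings are the same and the handoff just works.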
  8. Honestly, I was thinking you were talking about a VM, not a docker, but the same principle should apply. Problem is, I'm not sure there is a way to pass an entire controller to a docker. In the case of a VM, when you pass the entire controller then the guest VM assigns the ID to the USB device, and can control it directly. When you rely on the host OS to ID the device, the ID will change, as you found out. I don't know if you can even do what you need in a docker.
  9. Unless you manually delete the XML template on the flash drive and totally reinstall the docker, it will keep all your changes. The inability to alter users' templates when a docker change necessitates it is actually one of the complaints from the docker authors.
  10. It's no big deal to add or edit the path yourself. Docker path mapping is only simple after you understand it, so I get why you want to change the template, but it's counterproductive because you really do NEED to figure out docker path mapping, and making it almost work by changing only the app side is worse than making it completely different so you need to change it to match the rest of your settings. Just my opinion.
  11. Remove the complete/ from the /downloads <--> /mnt/user/appdata/Downloads/complete/ line. It should be /downloads <--> /mnt/user/appdata/Downloads/ for both containers, or it won't work.
  12. Passing through an entire USB controller instead of the individual device should work.
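For the VM case, passing a whole controller is done with a PCI hostdev entry in the VM's xml; a sketch follows, where the PCI address is a placeholder that must be replaced with your actual USB controller's address as reported by lspci:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- placeholder: substitute your USB controller's lspci address -->
    <address domain='0x0000' bus='0x00' slot='0x14' function='0x0'/>
  </source>
</hostdev>
```

Because the guest then talks to the controller directly, device IDs are assigned inside the VM and stay stable across host reboots.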
  13. Krusader docker by Sparklyballs has file compare built in already.
  14. All your /downloads container mappings should be the same: /mnt/disks/Downloads/NZB/. Specify where in that tree to look for completed downloads in the apps themselves; sonarr would be /downloads/completed/TV/, and radarr would be /downloads/completed/Movies/
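To illustrate how that shared mapping resolves (the file name below is hypothetical), translating a container path back to the host path is just a prefix swap:

```shell
# Shared mapping: container /downloads -> host /mnt/disks/Downloads/NZB
host_root="/mnt/disks/Downloads/NZB"
container_root="/downloads"

# A completed episode as sonarr sees it inside its container:
file_in_container="/downloads/completed/TV/episode.mkv"

# Same file on the host: strip the container prefix, prepend the host one.
file_on_host="${host_root}${file_in_container#"$container_root"}"
echo "$file_on_host"   # /mnt/disks/Downloads/NZB/completed/TV/episode.mkv
```

Because every container uses the same /downloads prefix, this translation works identically for all of them.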
  15. Sure, just put the gparted iso into the "OS install ISO" slot, hit the update button at the bottom, then edit the xml to change the boot order.

      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/domains/Debian/vdisk1.img'/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/Utilities/gparted-live-0.8.1-3.iso'/>
          <target dev='hda' bus='sata'/>
          <readonly/>
          <boot order='2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>

      Change the vdisk to boot order 2, and the iso to boot order 1. Once you are done, edit the xml and change them back, or just edit the VM and remove the ISO. Theoretically you can change the boot order with the GUI, but I've never been able to get that to work.
  16. When you say everything, that includes IDENTICAL docker path mappings, right? I suspect your host-to-container paths are different.
  17. Link in first post of this thread. https://github.com/linuxserver/docker-openvpnas#setting-up-the-application
  18. That means the VPN isn't starting properly; rtorrent won't start if the VPN is enabled in the docker settings but not working. Verify the credentials and settings for your VPN.
  19. Yes, you can. Problem is, so can everyone else in the world. If you leave your server open, it will get ruined eventually.
  20. The function needs to remain, but it could do with a renaming. I purposely have shares where I manually switch data between the fast SSD cache and the slower array disks, based on usage. I need those shares to write new files to the cache drive, since fresh files are typically referenced frequently for a while. As the data ages, I manually move it to the array folders so it's still available, but not high priority. An automatic version of what I do manually has been discussed in the past.
  21. I agree, I was just trying to be funny, pointing out that the title of the thread references spinning up disks, but not disk activity. I suppose, if you want to be pedantic, "spinning up" something has been turned into a colloquialism meaning to start using it.
  22. Don't know how easy it would be, but the obvious answer is to have an option to automatically exclude any filesystems residing on SSD media, since the title of the plugin is about spinning disks.
  23. There have been discussions about this. Honestly, I'm still in the "don't do it" camp. If you even entertain a built-in solution at all, I'd make it a separate plugin on the unraid side, so by default the base installation is local only. The more hoops people have to jump through to lower security on their server, the better. The functionality is already available; OpenVPN and DuckDNS cover 99% of users. For now, just point people to http://lime-technology.com/forum/index.php?topic=54364.0 when they want remote access.