testdasi

testdasi last won the day on August 17 2020

Community Answers

  1. Have you got a test file I can use to test varken? I don't use it personally, so I need something to verify that it works correctly in GUS.

     This is a known bug with Deluge (see bug report below). As binhex himself said, "if the plugin is not made for deluge v2 then you will be out of luck until the plugin developer updates the plugin to work with python 3.x". Binhex seems to have implemented his own fix, so I think you might have no choice but to use his docker instead. https://github.com/binhex/arch-delugevpn/issues/66 I'll look into it but it may take a while.

     The DNS queries within your network are not encrypted, e.g. DNS between the client (192.168.1.202) and the Pihole docker (192.168.1.3). It's the query out of the docker to the upstream DNS resolver (e.g. from Pihole (192.168.1.3) to Google (8.8.8.8)) that is encrypted.
  2. For (3), a safer alternative to enabling disk shares universally is to have a custom SMB config file pointing to a top-level folder on a disk (e.g. for a share called sharename, have the custom SMB config point to /mnt/cache/sharename or /mnt/disk1/sharename). Then have the SMB Extras in SMB Settings "include" that config file. That way you just need to restart SMB to change the config file (instead of needing to stop the array to change SMB Extras). Works really well with my cache-only nvme-raid0 share.

     More detailed guide: let's say you have a cache-only share called "sharename" that you want a user called "windows" to access via SMB with shfs bypass. Create a smb-custom.conf with the content below and save it in /boot/config:

       [sharename-custom]
       path = /mnt/cache/sharename
       comment =
       browseable = no
       Force User = nobody
       valid users = windows
       write list = windows
       vfs objects =

     Then, with the array stopped, go to Settings -> SMB and add this line to the Samba extra configuration box:

       include = /boot/config/smb-custom.conf

     Apply, done, start the array. You can now access the bypassed share at \\tower\sharename-custom or \\server-ip\sharename-custom.

     Some hints:
       • It's critical that the name of the bypassed share (e.g. sharename-custom) is DIFFERENT from the normal share name or you will run into weird quirks, i.e. the Unraid share conflicts with your custom share.
       • To add more shares, just copy-paste the above block in the smb-custom.conf and make the appropriate changes (e.g. name, path, user), save and then restart SMB. No need to stop the array. Similarly, to edit a share, just edit smb-custom.conf, save and restart SMB.
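     The add-more-shares-without-stopping-the-array workflow can be sketched as a couple of shell commands. This is a minimal sketch: the share and user names are the example values from the post, the file is written to the current directory for illustration (you would copy it to /boot/config), and the rc.samba restart path is an assumption that may vary between Unraid versions.

```shell
# Generate an extra share block for smb-custom.conf (example names from the post).
cat > smb-custom.conf <<'EOF'
[sharename-custom]
path = /mnt/cache/sharename
browseable = no
Force User = nobody
valid users = windows
write list = windows
EOF

# On the Unraid box (assumed commands; the rc script path can differ by version):
#   cp smb-custom.conf /boot/config/
#   /etc/rc.d/rc.samba restart    # reload SMB -- no array stop needed
```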
  3. There shouldn't be any gotchas. Compare the folder structure of these:
       • Telegraf docker: the host folder that corresponds to /var/lib/influxdb
       • GUS: the host folder that corresponds to /data/influxdb
     They should look identical in terms of structure. If that is confirmed then just turn the dockers off, back up the data before proceeding further (if the data is important to you), delete the existing GUS /data/influxdb content and then just copy the Telegraf /var/lib/influxdb data over (ensuring no change to the previous folder structure). I suggest using mc from the command line (or Krusader or a similar 2-panel docker) so the comparison is a lot more obvious.
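     The copy itself can also be done from the command line instead of mc. A sketch under stated assumptions: the docker names and the influxdb subfolder layout are placeholders, and the temporary directories stand in for your real host-side appdata paths.

```shell
# Temporary directories stand in for the real host paths (substitute yours):
SRC=$(mktemp -d)   # host folder mapped to Telegraf's /var/lib/influxdb
DST=$(mktemp -d)   # host folder mapped to GUS's /data/influxdb
mkdir -p "$SRC/data" "$SRC/meta" "$SRC/wal"    # typical influxdb layout (assumed)
touch "$SRC/meta/meta.db" "$DST/stale.txt"     # demo files

# docker stop telegraf gus        # real run: turn both dockers off first
cp -a "$DST" "${DST}.bak"         # backup before touching anything
rm -rf "$DST"/*                   # delete the existing GUS /data/influxdb content
cp -a "$SRC"/. "$DST"/            # copy the Telegraf data over, structure intact
# docker start gus                # real run: start GUS again
```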
  4. Yep. Specified that it's 1.3 now. I'll keep an eye out for v1.4.
  5. @Hoopster: the latest version of GUS has all the configs enabled, including inputs.apcupsd. However, it won't delete the old Telegraf config. You just need to delete the telegraf.conf file and update to the latest version (and restart the docker if already updated) and it should work.
  6. Update (23/09/2020), Grafana Unraid Stack changes:
       • Exposed the Influxdb RPC port and changed it to a rarer default value (58083) instead of the original, common 8088.
       • Added falconexe's Ultimate UNRAID Dashboard.

     Thanks. It's done.

     The GUS dashboard is based on a Threadripper 2990WX, so you will have to customize the default dashboard to suit your own exact hardware. Also give UUD a try to see if you like that layout more.
  7. I hate it when apps do that. That 8088 is for Influx backup / restore, but their config file doesn't allow disabling it if not used. And to make it worse, they use a rather common value of 8088.

     Sounds like something for the to-do list. I'll have to add that to the exposed ports so it's transparent to people (and probably pick a different port from 8088 - I'm probably just gonna pick 58083 or something random like that).
  8. Show me your docker settings please. Also go to the folder you map to /config and look for influxd.log under the influxdb folder and attach it here. Hopefully it tells me what's wrong.
  9. You can either edit the graph to change the parameters to use hddtemp, or alternatively just wipe your config folder and reinstall with USE_HDDTEMP=no. The panels use SMART, not HDD_Temp. I only included the option because some prefer it over SMART (which is why it defaults to no).

     I kinda wonder if I should make a UUD edition that includes @falconexe's dashboard. That looks pretty damn sleek out of the box.

     Try using host network and, in the nginx config file, change:

       set $upstream_app grafana;
       set $upstream_port 3000;

     to:

       set $upstream_app 192.168.0.2;
       set $upstream_port 3006;

     (Replace 192.168.0.2 with the actual IP of your Unraid server.) With bridge network (including custom), you have to map the ports yourself (it's the "Add blablabla" at the bottom of the Docker edit GUI). But that shouldn't be necessary with host network.
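     For context, those two set lines live inside a linuxserver-style proxy conf. A rough sketch of the surrounding fragment, where the file name, IP and port are the post's examples or assumptions rather than fixed values:

```nginx
# grafana.subdomain.conf fragment (sketch; variable names follow the
# linuxserver proxy-conf convention)
location / {
    set $upstream_app 192.168.0.2;   # your Unraid server's IP, not the docker name
    set $upstream_port 3006;         # Grafana's host port
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
}
```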
  10. You can just add more path mappings in the Unraid GUI (from the Dashboard, click on Docker, then Edit, then scroll to the bottom for the + Add blabla option). The default is 1 mapping to docker /config (settings) and 1 mapping to /data (watch, downloads etc.), just for simplicity. On my server I have individual mappings for each /data subfolder, so for example /data/rtorrent/watch, /data/rtorrent/incomplete and /data/rtorrent/complete all point to different folders on the host (they are on different pools on my server). Note that if you map subfolders then you can't map the parent.

      Settings should not reset after a restart, btw. Mine is all fine, so you might have done incorrect path mappings. Please attach the log.
  11. Quoting from your post, I wouldn't call this working. In any case, you are the first report of a primary RX580 pass-through working (albeit intermittently) that I know of, so maybe AMD is finally starting to fix stuff.

      "Think some luck is involved, after starting the VM, and seeing a black screen, I went on to reading forums on the other computer. Prob after couple minutes delay the screen just came on... I tried to passthrough the keyboard and mouse afterwards (quite straight forward, just 2 checkboxes in VM settings), that seems to have messed with the GPU passthrough which didnt work for a while (even when all the settings were the same as before, when it did work :s). Anyways, after a few VM restarts, everything seems to work again................... Luck I tell you, this doesnt make sense to me...."
  12. Plex support specifically mentioned that remote access is encrypted. I guess you can use a reverse proxy if you don't trust their word. https://support.plex.tv/articles/206225077-how-to-use-secure-server-connections/

      Also, even with encryption, your ISP will know you are streaming stuff if they bother to look, just by the pattern and amount of data transfer. Encryption just stops them from knowing WHAT you stream.
  13. Binhex has implemented multi-remote functionality (see post below). I have deprecated my docker - it was intended as a temp workaround while waiting for binhex anyway.
  14. The high-level instruction is: log in (default is admin/admin), then hover the mouse over the + sign and pick Import. Then upload the json file from the UUD topic.
  15. I don't have a UPS, so unfortunately I can't really test adding UPS functionality. Installing the apcupsd exporting script is (somewhat) trivial but, without an actual device to test with, it's pure luck if anything works. You are probably better off installing the apcupsd-influxdb-exporter docker from the Apps page and pointing it to your GUS docker IP + 8086 port. Have a look at this guide to see if it helps. You will also have to read up a bit on how to edit a grafana dashboard / panel to update the queries.

      Each person's hardware has some unique parameters (e.g. which sensor to read CPU temperature from), making a completely hands-off dashboard virtually impossible. Usually it just involves picking a few values from the drop-down to see which one makes the most sense.

      The loop is intentional, to make sure a VPN connection is established before anything else is run. Assuming your VPN server is working and there is no issue with the Internet connection, the most likely reason is an issue with the ovpn config file. The most frequent mistake is missing credentials (i.e. login). Create a file, e.g. login.txt, with exactly 2 lines: 1st line is the username, 2nd line is the password. Save that file in the same folder as the openvpn.ovpn. Then edit your openvpn.ovpn, find the auth-user-pass line and add login.txt after it so it looks like this:

        ...
        auth-user-pass login.txt
        ...

      If that still doesn't work then copy-paste your openvpn.ovpn file here and I can have a look.
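      Those credential steps can be sketched in shell. The stand-in config, placeholder username/password, and the assumption that the config contains a bare auth-user-pass line are all illustrations, not real values:

```shell
# Stand-in for your provider's openvpn.ovpn (only the relevant line shown).
printf 'client\nauth-user-pass\n' > openvpn.ovpn

# login.txt: exactly 2 lines -- 1st the username, 2nd the password (placeholders).
printf '%s\n%s\n' 'myusername' 'mypassword' > login.txt

# Point the auth-user-pass directive at the credentials file.
sed -i 's/^auth-user-pass$/auth-user-pass login.txt/' openvpn.ovpn
```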