This is what annoys me so much that I exiled myself from the Unraid forum. These excuses make no sense and yet keep reinforcing the group-think.

Catastrophes can happen, no doubt. What matters is the probability of such a catastrophe. The failure rate of an SSD is orders of magnitude lower than that of a USB stick. In fact, I have deliberately tried to kill an SSD to the point that its TBW SMART counter corrupted, and it still runs fine. I am actively trying to run a QLC SSD into the ground and it is still holding up. My USB stick had barely seen any writes and it failed.

Self-selection means the people who reply with "I have had no problem" automatically exclude those who abandoned Unraid after having major issues. My stick had no issue since 2016, 2 years older than yours! Expect something to happen in the next 2 years?

Restoring is not the problem. The problem is that the cause could easily have been avoided. I'm signing out.
-
I woke up this morning and found that my Unraid server fails to boot - the same story that has happened to many people many times, no thanks to Unraid's continued refusal to allow SSD boot... but I digress. Here's what's different: I mounted the USB stick on another computer and found that it is still mountable, but the entire drive is empty except for a single zero-byte file called "RRaA". Usually when a stick is corrupted, it's unmountable on another PC. Even weirder, the used space reported by Windows is as if all the files were still there. For what it's worth, my stick is plugged into a USB 2.0 port and has been working since 2016, so I'm kind of overdue for a corruption. It's just that this corruption is so strange that I want to report back in case anyone has seen something similar.
-
Have you got a test file I can use to test Varken? I don't use it personally, so I need something to verify that it works correctly in GUS.

This is a known bug with Deluge (see the bug report below). As binhex himself said, "if the plugin is not made for deluge v2 then you will be out of luck until the plugin developer updates the plugin to work with python 3.x". Binhex seems to have implemented his own fix, so I think you might have no choice but to use his docker instead. https://github.com/binhex/arch-delugevpn/issues/66

I'll look into it but it may take a while.

The DNS queries within your network are not encrypted, e.g. DNS between the client (192.168.1.202) and the Pihole docker (192.168.1.3). It's the query out of the docker to the upstream DNS resolver (e.g. from Pihole (192.168.1.3) to Google (8.8.8.8)) that is encrypted.
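If you want to see this for yourself, here is a quick sketch with tcpdump and dig (the interface name br0 and the IPs are assumptions based on the example above - adjust to your network):

    # LAN leg: client -> Pihole queries travel as plain UDP/53,
    # so the looked-up hostnames are readable in the capture.
    tcpdump -i br0 -n 'port 53 and host 192.168.1.3'

    # Trigger a lookup from a client against Pihole.
    dig @192.168.1.3 example.com

    # Upstream leg: Pihole -> resolver leaves over DoH/DoT (443/853),
    # so a capture there only shows encrypted TLS traffic.
    tcpdump -i br0 -n 'host 192.168.1.3 and (port 443 or port 853)'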
-
For (3), a safer alternative to enabling disk shares universally is to have a custom SMB config file pointing to a top-level folder on a disk (e.g. for a share called sharename, have the custom SMB config point to /mnt/cache/sharename or /mnt/disk1/sharename). Then have the SMB Extras in SMB Settings "include" that config file. That way you only need to restart SMB to change the config file (instead of needing to stop the array to change SMB Extras). Works really well with my cache-only nvme-raid0 share.

More detailed guide:

Let's say you have a cache-only share called "sharename" that you want a user called "windows" to access via SMB with shfs-bypass.

Create a smb-custom.conf with the content below and save it in /boot/config:

    [sharename-custom]
    path = /mnt/cache/sharename
    comment =
    browseable = no
    Force User = nobody
    valid users = windows
    write list = windows
    vfs objects =

Then with the array stopped, go to Settings -> SMB and add this line to the Samba extra configuration box:

    include = /boot/config/smb-custom.conf

Apply, done, start the array. You can now access the bypassed share at \\tower\sharename-custom or \\server-ip\sharename-custom.

Some hints:
- It's critical that the name of the bypassed share (e.g. sharename-custom) is DIFFERENT from the normal share name, or you will run into weird quirks, i.e. the Unraid share conflicts with your custom share.
- To add more shares, just copy-paste the above block in smb-custom.conf and make the appropriate changes (e.g. name, path, user), save, and then restart SMB. No need to stop the array. Similarly, to edit, just edit smb-custom.conf, save and restart SMB.
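If you want to sanity-check the file before reloading, something like this from the Unraid terminal should do (testparm ships with Samba; the rc.samba script is the usual Slackware location, so treat the exact path as an assumption for your version):

    # Parse the combined Samba config, including the custom include,
    # and report any syntax errors.
    testparm -s

    # Restart Samba so it picks up edits to smb-custom.conf
    # (no array stop needed once the include line is in place).
    /etc/rc.d/rc.samba restart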
-
There shouldn't be any gotchas. Compare the folder structure of these:
- Telegraf docker: the host folder that corresponds to /var/lib/influxdb
- GUS: the host folder that corresponds to /data/influxdb

They should look identical in terms of structure. If that is confirmed, then turn the dockers off, back up the data before proceeding further (if the data is important to you), delete the existing GUS /data/influxdb content and then just copy the Telegraf /var/lib/influxdb data over (ensuring no change to the folder structure). I suggest using mc from the command line (or Krusader or a similar 2-panel docker) so it all looks a lot more obvious.
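As a rough sketch of that copy from the command line (the container names and appdata paths here are hypothetical - substitute the host folders your dockers actually map):

    # Stop both dockers, then back up the GUS data just in case.
    docker stop telegraf gus
    cp -a /mnt/user/appdata/gus/influxdb /mnt/user/appdata/gus-influxdb-backup

    # Swap in the old data: clear the GUS influxdb folder, then copy in
    # the host folder that was mapped to /var/lib/influxdb, structure intact.
    rm -rf /mnt/user/appdata/gus/influxdb/*
    cp -a /mnt/user/appdata/influxdb/. /mnt/user/appdata/gus/influxdb/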
-
Yep. Specified that it's 1.3 now. I'll keep an eye out for v1.4.
-
@Hoopster: the latest version of GUS has all the configs enabled, including inputs.apcupsd. However, it won't delete the old Telegraf config. You just need to delete the telegraf.conf file and update to the latest version (and restart the docker if already updated), and it should work.
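Something along these lines from the terminal, with the path and container name being guesses at the defaults (use whatever your /config mapping points at):

    # Remove the stale config; per the above, GUS writes a fresh
    # telegraf.conf with all inputs enabled on the next start.
    rm /mnt/user/appdata/gus/telegraf/telegraf.conf
    docker restart gus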
-
Update (23/09/2020): Grafana Unraid Stack changes:
- Expose the Influxdb RPC port and change it to a rarer default value (58083) instead of the original, common 8088.
- Added falconexe's Ultimate UNRAID Dashboard.

Thanks. It's done. The GUS dashboard is based on a Threadripper 2990WX, so you will have to customize the default dashboard to suit your own exact hardware. Also give UUD a try to see if you like that layout more.
-
I hate it when apps do that. Port 8088 is for Influx backup / restore, but their config file doesn't allow disabling it if unused. And to make it worse, they use a rather common value of 8088. Sounds like something for the to-do list: I'll have to add it as an exposed port so it's transparent to people (and probably pick a different port from 8088 - I'm probably just gonna pick 58083 or something random like that).
-
Show me your docker settings please. Also go to the folder you map to /config, look for influxd.log under the influxdb folder, and attach it here. Hopefully it tells me what's wrong.
-
You can either edit the graph to change the parameters to use hddtemp, or alternatively just wipe your config folder and reinstall with USE_HDDTEMP=no. The panels use SMART, not HDD_Temp; I only included the option because some prefer it over SMART (which is why it defaults to no).

I kinda wonder if I should make a UUD edition that includes @falconexe's dashboard. That looks pretty damn sleek out of the box.

Try using host network and, in the nginx config file, change:

    set $upstream_app grafana;
    set $upstream_port 3000;

to

    set $upstream_app 192.168.0.2;
    set $upstream_port 3006;

(Replace 192.168.0.2 with the actual IP of your Unraid server.) With bridge network (including custom), you have to map the ports yourself (it's the "Add blablabla" at the bottom of the Docker edit GUI). But that shouldn't be necessary with host network.
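Before pointing the reverse proxy at it, it may be worth confirming Grafana actually answers on that address (same example IP and port as above):

    # A 200 or a 302 redirect to /login means Grafana is reachable;
    # "connection refused" points at a wrong IP or port mapping.
    curl -I http://192.168.0.2:3006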
-
You just add more path mappings in the Unraid GUI (from the Dashboard, click on Docker, then Edit, then scroll to the bottom for the + Add blabla option). The default template has 1 mapping to the docker's /config (settings) and 1 mapping to /data (watch, downloads etc.), just for simplicity. On my own server I have individual mappings for each /data subfolder, so for example /data/rtorrent/watch, /data/rtorrent/incomplete and /data/rtorrent/complete all point to different folders on the host (they are on different pools on my server). Note that if you map subfolders then you can't map the parent.

Settings should not reset after a restart, btw. Mine is all fine, so you might have incorrect path mappings. Please attach your log.
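For illustration, that per-subfolder layout corresponds to something like this in docker run terms (the image name and host paths are made up - the GUI's + Add entries generate the equivalent -v flags):

    # Each /data subfolder gets its own host folder, possibly on
    # different pools; the parent /data itself is NOT mapped.
    docker run -d --name rtorrent \
      -v /mnt/user/appdata/rtorrent:/config \
      -v /mnt/pool1/watch:/data/rtorrent/watch \
      -v /mnt/pool2/incomplete:/data/rtorrent/incomplete \
      -v /mnt/pool3/complete:/data/rtorrent/complete \
      some/rtorrent-image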
-
Quoting from your post, I wouldn't call this working. In any case, yours is the first report of a primary RX580 pass-through working (albeit intermittently) that I know of, so maybe AMD is finally starting to fix stuff.

"Think some luck is involved, after starting the VM, and seeing a black screen, I went on to reading forums on the other computer. Prob after couple minutes delay the screen just came on... I tried to passthrough the keyboard and mouse afterwards (quite straight forward, just 2 checkboxes in VM settings), that seems to have messed with the GPU passthrough which didnt work for a while (even when all the settings were the same as before, when it did work :s). Anyways, after a few VM restarts, everything seems to work again................... Luck I tell you, this doesnt make sense to me...."
-
Confused about OpenVPN Tunnels, HTTPS and ReverseProxy?
testdasi replied to questionbot's topic in General Support
Plex support specifically says that remote access is encrypted: https://support.plex.tv/articles/206225077-how-to-use-secure-server-connections/ I guess you can use a reverse proxy if you don't trust their word. Also, even with encryption, your ISP will know you are streaming if they bother to look, just from the pattern and volume of data transfer. Encryption only stops them from knowing WHAT you stream.
-
Binhex has implemented multi-remote functionality (see post below). I have deprecated my docker - it was intended as a temp workaround while waiting for binhex anyway.