Everything posted by b0m541

  1. The issue with being unable to mount remote shares if DNS domain names are used for the server has been fixed in 2020.11.25a. @dlandon Thank you for the quick solution! Great responsiveness!
  2. That's what I did; since then I get 502 Bad Gateway, still there after the most recent update today.
  3. None of the external programs such as ffmpeg or ebook-convert seems to come with the container, or they are not properly set up. Am I missing the point of using containers here? Why would those program names be pre-configured, but the programs be missing?
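A quick way to see which helper binaries actually ship inside a container is to probe for them with `docker exec`; `MyContainer` and the binary list below are placeholders for your own setup:

```shell
# Check for each expected helper inside the container;
# prints the path if present, "not found" otherwise
docker exec MyContainer sh -c 'for b in ffmpeg ebook-convert; do
  command -v "$b" || echo "$b: not found"
done'
```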
  4. After the latest container update two days ago the database needed to be rebuilt. I did a completely fresh re-install, and now I get "502 Bad Gateway" on port 5000. Anyone else with the same issue? How do I fix it?
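A 502 usually means the web server in front is up but the backend process behind it died; the container log is the first place to look. `MyContainer` is a placeholder for the container name:

```shell
# Show the last lines of the container's log to see why the backend failed
docker logs --tail 50 MyContainer

# Restarting sometimes clears a half-finished database rebuild
docker restart MyContainer
```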
  5. It definitely must be the DNS name handling. When I use the IP of the server, it works. DNS resolution works on unRAID using ping; forward and reverse zones are properly set up. One note: the server name I had used is a CNAME / alias, and the reverse lookup of that IP yields a different DNS name. I also tried the A-record name instead of the CNAME; the online check fails as well. PS: I am quite sure that for diagnosing this, the diagnostics would have been of no big help.
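For anyone hitting the same symptom, the CNAME vs. A-record mismatch described above can be checked from the unRAID shell; `servername.subnet.tld` and the IP are placeholders for your own host:

```shell
# Show the full resolution chain; a CNAME line here means strict code
# may end up comparing against the canonical (A-record) name instead
dig +noall +answer servername.subnet.tld

# Reverse-lookup the resulting IP; if this prints a different name
# than the one entered, a strict forward/reverse check will fail
dig +short -x 192.0.2.10
```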
  6. Sorry, you asked for a screenshot. So the names are as follows: Source: //servername.subnet.tld/foldername Mountpoint: SERVERNAME.SUBNET.TLD_foldername servername: lowercase, including alphanumeric characters and a "-" dash; all other components: lowercase alphabetic characters. How does the online test work? ping servername.subnet.tld shows that ICMP ECHO works. Also, mounting the share manually on unRAID works without problems.
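The manual mount test mentioned above can be reproduced like this (run as root on unRAID; server, share, and credentials are placeholders):

```shell
# Mount the same share by hand to rule out SMB itself as the culprit
mkdir -p /mnt/testmount
mount -t cifs //servername.subnet.tld/foldername /mnt/testmount \
  -o username=myuser,password=mypass,vers=3.0

# If this lists the share contents, the problem is in the plugin's
# name handling, not in SMB or the credentials
ls /mnt/testmount
umount /mnt/testmount
```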
  7. Sure, knock yourself out: the share is mountable on other machines, there are no additional firewall filters for the unRAID host, and the mount point name is the one generated by the plugin.
  8. This is not about any particular file in itself. The whole set in itself is deanonymizing you 100%. Every setup described in that detail is unique. Some of the files alone would be sufficient to verify the dataset against a given setup, e.g. the combination of hardware drive names in use.
  9. I saw that they are anonymized, I looked at them, and my concerns are still there. The concept of posting a complete set of diagnostics data publicly, for anyone who desires to read it, conflicts to me with the concept of data thriftiness (data minimization). A request to Limetech will not resolve that conflict. I honestly doubt that all of that data is necessary to actually find out why the mount will not work. I would assume it is also in your interest if users are willing to look into the problem on their own. So if you let them know where to look, where is the harm? I understand th
  10. Oh I see. No offense, but that data is super privacy-invasive, and I do care about that. Please let me know which exact files you need to see where the problem is; then I might also be able to understand the problem on my own.
  11. Mounting remote SMB shares does not work for me in version 2020.11.23b. I can create an entry fine, and while doing that it will show the available shares. After the entry is created, the "Mount" button is greyed out and cannot be clicked. I did not find any messages relating to that in the regular log files; maybe I looked in the wrong location. This is a rather urgent topic for me, since I have several containers that use remote shares.
  12. Same problem here. The web UI works, and searching in the UI works fine. However, the button "Lookup in Browser" in Picard produces the same unspecific error message:

      Internal Server Error
      Oops, something went wrong!
      Error message: (No details about this error are available)
      Time: 2020-11-22 11:15:21 UTC
      Host: 3ba754cc6c04
      URL: http://myunraid.test:6000/taglookup?artist=redacted&release=redacted&track=redacted&tracknum=redacted&duration=redacted&filename=redacted&tport=8000
      Request data: $VAR1 = { 'query_parameters' => { 'file
  13. Hey guys, I have a problem with the UniFi controller that I cannot figure out and fix on my own. I had the latest-branch container running. The whole thing started with a UAP AC Mesh, connected via mesh to a wired UAP, becoming disconnected and, when power-cycled, not being able to reconnect. I then wired that UAP AC Mesh to the subnet where the controller and all UniFi components are wired. A reset and even a factory reset resulted in the UAP AC Mesh being shown as "Adopting" in the controller but not making any progress. I then realized t
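For an AP stuck in "Adopting", a manual set-inform over SSH sometimes gets adoption moving again; the addresses below are placeholders, and the default credentials only apply after a factory reset:

```shell
# 1) SSH into the AP (default credentials ubnt/ubnt after a factory reset)
ssh ubnt@<ap-ip>

# 2) At the AP's prompt, point it at the controller; run it a second
#    time after clicking "Adopt" in the controller so the URL persists
set-inform http://<controller-ip>:8080/inform
```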
  14. This may be normal for this container, but I wouldn't call it optimal. So you are right: the logs count against "Writable" instead of "Log", when actually the logs should be placed in the log folder in appdata... But hey, you get what you pay for. Thanks for your help.
  15. I am wondering: are there no others using SNMP to monitor / manage their UPS? Is nobody else having this problem? If so, there must be a way to do it right / better, and I would like to learn how. Is nobody currently maintaining the NUT plugin?
  16. Is it normal that the UNMS container is so bloated? unms container: 3.67 GB writable: 1.81 GB log: 28.2 MB If not, what can I do to get it back to normal?
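To see where the space actually goes (image vs. per-container writable layer), Docker can break the usage down itself:

```shell
# Per-container size: "virtual" is the image, the first number is the
# container's own writable layer
docker ps -s --format 'table {{.Names}}\t{{.Size}}'

# Overall breakdown across images, containers, and volumes
docker system df -v
```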
  17. That did work as long as 5.8 was part of -current. Now -current comes with 5.9, and its libnetsnmp is .40, not .35. This does not work with the NUT 2020.05 plugin, which expects .35. Unfortunately NONE of the slackware64 releases has libnetsnmp .35! I guess we really need a sustainable solution now. Any ideas?
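Not a sustainable fix, but a stopgap some users try is pointing the old soname at the new library. This is an assumption about paths and sonames (verify them with `ls` first), and because it papers over a real ABI difference it can still crash the snmp-ups driver:

```shell
# See which libnetsnmp soname the system actually ships
ls /usr/lib64/libnetsnmp.so*

# Make the old soname the plugin expects resolve to the new library
ln -s /usr/lib64/libnetsnmp.so.40 /usr/lib64/libnetsnmp.so.35

# Refresh the dynamic linker cache
ldconfig
```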
  18. Is it normal that no data points are exported while the UPS is running on battery? It only exports data points while the UPS is on line input. This is what the debug log gives:

      [DEBUG] http://localhost:8086 "POST /write?db=nut HTTP/1.1" 400 156
      Traceback (most recent call last):
        File "/src/nut-influxdb-exporter.py", line 113, in <module>
          print(client.write_points(json_body))
        File "/usr/local/lib/python3.8/site-packages/influxdb/client.py", line 594, in write_points
          return self._write_points(points=points,
        File "/usr/local/lib/python3.8/site-packages/influxd
  19. Connecting to InfluxDB host: xyz, DB: nut
      Connected successfully to InfluxDB
      Connecting to NUT host xyz:3493
      Connected successfully to NUT
      Error connecting to InfluxDB.

      The same with DEBUG on:

      [DEBUG] http://localhost:8086 "POST /write?db=nut HTTP/1.1" 400 158
      Traceback (most recent call last):
        File "/src/nut-influxdb-exporter.py", line 113, in <module>
          print(client.write_points(json_body))
        File "/usr/local/lib/python3.8/site-packages/influxdb/client.py", line 594, in write_points
          return self._write_points(points=points,
        File "/usr/local/lib/python3.8/site-packages/influxdb/client.p
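The HTTP 400 means InfluxDB rejected the write, and its response body (which the exporter's traceback hides) names the offending field. One way to see it is to replay a write by hand against the 1.x write endpoint; the measurement and field names below are made-up examples, not what the exporter actually sends:

```shell
# -i prints the response headers and body; a 400 body like
# "unable to parse ..." points at the malformed field
curl -i -XPOST 'http://localhost:8086/write?db=nut' \
  --data-binary 'ups,ups=myups battery_charge=100'
```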
  20. [myups]
          driver = snmp-ups
          port = <myups-ip>
          snmp_version = v3
          secLevel = "authPriv"
          secName = "mysnmpuser"
          authProtocol = "MD5"
          privProtocol = "DES"
          authPassword = "myauthpw"
          privPassword = "mycryptpw"
          pollfreq = 15

      I have SNMPv3 working fine with MD5 and DES. Unfortunately, both algorithms are no longer considered sufficiently secure, so I would like to replace MD5 with SHA and DES with AES. My UPS supports SHA and AES fine, which I can test using snmpwalk. Unfortunately, when I use authProtocol = "SHA" privProtocol = "AES", NUT will not start and claims to not find
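For reference, the snmpwalk test mentioned above looks roughly like this (user, passphrases, and IP are placeholders matching the ups.conf entry). If this succeeds but NUT still refuses to start, the snmp-ups driver itself, not the UPS, lacks support for those protocols, e.g. because the Net-SNMP it was built against is too old:

```shell
# Query the UPS over SNMPv3 with SHA auth and AES privacy;
# sysDescr.0 is just a harmless OID to prove the credentials work
snmpwalk -v3 -l authPriv -u mysnmpuser \
  -a SHA -A 'myauthpw' \
  -x AES -X 'mycryptpw' \
  <myups-ip> sysDescr.0
```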
  21. OK, got it. Let's say I wanted to have a swap file of 128 GB, just in case... What is the recommended route to go: put swap on HDD or SSD? For that I would need to reformat a drive or the cache pool from BTRFS to XFS. If the best way is to take the cache SSD pool from BTRFS to XFS, what is the recommended strategy to get from a) to b)? Note: I have empty drives lingering in the array; it would be easy to reformat one from BTRFS to XFS, but I guess putting swap on the cache pool is better?
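Once a drive is on XFS, creating the swap file is straightforward (run as root; the path and size are examples, not recommendations). dd is used instead of fallocate because swapon rejects files with holes on some filesystems:

```shell
# Create a 128 GiB file with no holes, lock down permissions,
# format it as swap, and enable it
dd if=/dev/zero of=/mnt/cache/swapfile bs=1M count=131072
chmod 600 /mnt/cache/swapfile
mkswap /mnt/cache/swapfile
swapon /mnt/cache/swapfile

# Verify the swap is active
swapon --show
```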
  22. I just realized that my unRAID machine does not have swap enabled, and I did not find much information here in the forum. Does unRAID by default use a swap file, or do I need to enable that manually somewhere? Of course I can do it using this plugin; I am just wondering if there is a standard mechanism in the unRAID UI for swap that I just didn't find. I also found that swap files are not supported on BTRFS on Linux kernels earlier than version 5, which is a pity, because all my drives use BTRFS, including the SSD cache drives. I never had memory problems, but wi
  23. Good to know about the blacklist properties; I did not find that information elsewhere. That could explain why there is no Replace Key button. So my only option is to use another USB drive I have, which may not be reliable, until I receive a new drive.
  24. Yes of course, have you read my posting? To be very clear: the button "Replace Key" is _not_ there, because the UI says the drive is blacklisted (as probably every drive you used before is). There is also no "Purchase Key" button on the registration tab. The UI clearly says one should contact support. So if someone knows how to get a trial or how to transfer the license to a previously blacklisted drive, that would help.