
b0m541

Members
  • Content Count

    47
  • Joined

  • Last visited

Community Reputation

1 Neutral

About b0m541

  • Rank
    Advanced Member


  1. Same problem here. The web UI works, and searching in the UI works fine. However, the "Lookup in Browser" button in Picard produces the same unspecific error message:

         Internal Server Error
         Oops, something went wrong!
         Error message: (No details about this error are available)
         Time: 2020-11-22 11:15:21 UTC
         Host: 3ba754cc6c04
         URL: http://myunraid.test:6000/taglookup?artist=redacted&release=redacted&track=redacted&tracknum=redacted&duration=redacted&filename=redacted&tport=8000
         Request data:
         $VAR1 = {
             'query_parameters' => {
                 'filename' => 'redacted',
                 'artist' => 'redacted',
                 'duration' => 'redacted',
                 'tport' => '8000',
                 'track' => 'redacted',
                 'tracknum' => 'redacted',
                 'release' => 'redacted'
             },
             'body_parameters' => {}
         };
         We're terribly sorry for this problem. Please wait a few minutes and repeat your request — the problem may go away.

     I am wondering what the tport parameter actually is; at least it does not match the TCP port of my MusicBrainz mirror. The MusicBrainz mirror process writes no log messages when this error occurs! Any ideas what is wrong with my installation and how to fix it?
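     One way to narrow this down might be to replay the lookup against the mirror directly and look at the raw response; host and port below are taken from the error above, and the query values are placeholders:

         curl -v "http://myunraid.test:6000/taglookup?artist=test&release=test&track=test&tracknum=1&duration=1000&filename=test&tport=8000"

     If the same 500 comes back while the mirror process still logs nothing, that would point at the mirror's front-end/proxy layer rather than the search itself.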
  2. Hey guys, I have a problem with the UniFi controller that I cannot figure out and fix on my own. I had the latest-branch container running. The whole thing started with a UAP AC Mesh, connected via mesh to a wired UAP, becoming disconnected and, when power-cycled, not being able to connect again. I then wired that UAP AC Mesh into the subnet where the controller and all UniFi components are wired. A reset and even a factory reset resulted in the UAP AC Mesh being shown as "Adopting" in the controller, but never making progress.

     I then realized that UDP port 10001 was not showing in the Docker port mappings and experimented with removing port mappings from the controller container, but after starting the container the mappings were still there. I could not find an explanation for the UDP 10001 mapping being missing (also after re-installation), nor for removed mappings still being shown. So I re-installed the container from scratch with default mappings. After that the container would start, but the UI would _not_ work any more. I then tried the 5.9 and 5.8 branches, but none of that worked. Restarting Docker or the server did not help either! It keeps saying "UniFi Controller is starting up... Please wait a moment".

     So now I am stuck with a non-working container! I do have a backup of the "latest" appdata, but my appdata backup folder somehow got wiped (of course, right when it is needed). These are still there, though:

         unifi-controller/data/backup/5.14.23.unf
         unifi-controller/data/backup/6.0.20.unf
         unifi-controller/data/backup/6.0.22.unf

     So I guess I should be able to roll back to 5.14.23. The UAP AC Mesh (default login) shows:

         Status: Unknown[11] (http://"controller-ip":8080/inform)

     which looks fine. Any help is welcome!
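     (For anyone hitting the same adoption loop: a minimal sketch of re-pointing the AP at the controller over SSH, assuming stock UAP firmware after a factory reset; the IPs are placeholders.)

         ssh ubnt@<ap-ip>    # default credentials ubnt/ubnt after a factory reset
         set-inform http://<controller-ip>:8080/inform
         # then click Adopt in the controller UI and run set-inform once more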
  3. This may be normal for this container, but I wouldn't call it optimal. So you are right: the logs count against "Writable" instead of "Log", when they should actually be placed in the log folder in appdata... But hey, you get what you pay for, and thanks for your help.
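     A possible workaround, sketched under the assumption that the image keeps its logs in a single directory, would be to bind-mount that directory out to appdata so it no longer counts against the writable layer (the in-container path here is a guess and must be verified first):

         docker exec unms ls /home/app/unms/data/logs    # verify the real log path first
         # then add a mapping to the container template, e.g.:
         #   -v /mnt/user/appdata/unms/logs:/home/app/unms/data/logs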
  4. I am wondering: are there no others using SNMP to monitor/manage their UPS? Is nobody else having this problem? If so, there must be a way to do it right or better, and I would like to learn how. Is nobody currently maintaining the NUT plugin?
  5. Is it normal that the UNMS container is so bloated?

         unms container: 3.67 GB
         writable: 1.81 GB
         log: 28.2 MB

     If not, what can I do to get it back to normal?
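     To put numbers on this, the standard Docker commands can break the usage down (the container name is a placeholder):

         docker ps -s --filter name=unms    # image size plus writable-layer size
         docker system df -v                # per-image/container/volume breakdown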
  6. That did work as long as 5.8 was part of current. Now current ships 5.9, and libnetsnmp is version 40, not 35. This does not work with the NUT 2020-05 plugin, which expects 35. Unfortunately, NONE of the slackware64 releases ships libnetsnmp 35! I guess we really need a sustainable solution now. Any ideas?
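     One stopgap that might bridge the time until the plugin is rebuilt is a compatibility symlink; this is a sketch only, and it works only as long as the ABI did not actually change between the two versions (library path from slackware64-current, adjust as needed):

         ln -s /usr/lib64/libnetsnmp.so.40 /usr/lib64/libnetsnmp.so.35
         ldconfig    # refresh the shared-library cache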
  7. Is it normal that no data points are exported while the UPS is running on battery? It only exports data points while the UPS is on line input. This is what the debug log gives:

         [DEBUG] http://localhost:8086 "POST /write?db=nut HTTP/1.1" 400 156
         Traceback (most recent call last):
           File "/src/nut-influxdb-exporter.py", line 113, in <module>
             print(client.write_points(json_body))
           File "/usr/local/lib/python3.8/site-packages/influxdb/client.py", line 594, in write_points
             return self._write_points(points=points,
           File "/usr/local/lib/python3.8/site-packages/influxdb/client.py", line 672, in _write_points
             self.write(
           File "/usr/local/lib/python3.8/site-packages/influxdb/client.py", line 404, in write
             self.request(
           File "/usr/local/lib/python3.8/site-packages/influxdb/client.py", line 369, in request
             raise InfluxDBClientError(err_msg, response.status_code)
         influxdb.exceptions.InfluxDBClientError: 400: {"error":"partial write: field type conflict: input field \"input.voltage\" on measurement \"ups_status\" is type integer, already exists as type float dropped=1"}
         Error connecting to InfluxDB.

     Looks to me like the exporter writes an integer whenever NUT provides no decimal point for a measurement that at other times does contain one (this is the case for voltage: when running on battery, NUT reports the voltage as 0 instead of 0.0). I would suggest always casting numerical measurements that may contain a decimal point to float, rather than letting the interpreter decide the variable type automatically. As a fix I just removed the conversion to integer, and there are no more errors in the log:

         def convert_to_type(s):
             """Convert a string to float if possible; otherwise return the
             string unchanged. (The integer branch was removed so numeric
             fields are always written as floats.)"""
             try:
                 return float(s)
             except ValueError:
                 return s
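     For reference, this is the line-protocol difference behind the conflict, reproduced here against the InfluxDB 1.x write API (database, measurement, and field names taken from the log above):

         # on line input the value has a decimal point and is written as a float:
         curl -XPOST 'http://localhost:8086/write?db=nut' --data-binary 'ups_status input.voltage=230.1'
         # on battery the exporter's int conversion ends up as integer 0i on the
         # wire, which InfluxDB rejects once the field already exists as float:
         curl -XPOST 'http://localhost:8086/write?db=nut' --data-binary 'ups_status input.voltage=0i'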
  8. The exporter log shows:

         Connecting to InfluxDB host:xyz, DB:nut
         Connected successfully to InfluxDB
         Connecting to NUT host xyz:3493
         Connected successfully to NUT
         Error connecting to InfluxDB.

     The same with DEBUG on:

         [DEBUG] http://localhost:8086 "POST /write?db=nut HTTP/1.1" 400 158
         Traceback (most recent call last):
           File "/src/nut-influxdb-exporter.py", line 113, in <module>
             print(client.write_points(json_body))
           File "/usr/local/lib/python3.8/site-packages/influxdb/client.py", line 594, in write_points
             return self._write_points(points=points,
           File "/usr/local/lib/python3.8/site-packages/influxdb/client.py", line 672, in _write_points
             self.write(
           File "/usr/local/lib/python3.8/site-packages/influxdb/client.py", line 404, in write
             self.request(
           File "/usr/local/lib/python3.8/site-packages/influxdb/client.py", line 369, in request
             raise InfluxDBClientError(err_msg, response.status_code)
         influxdb.exceptions.InfluxDBClientError: 400: {"error":"partial write: field type conflict: input field \"ups.temperature\" on measurement \"ups_status\" is type integer, already exists as type float dropped=1"}
         Error connecting to InfluxDB.

     The nut-influxdb-exporter container log is full of these. The problem is that there are long periods during which the exporter writes no values into InfluxDB. The curious thing is that in the InfluxDB log I see frequent, periodic POSTs from the exporter!

         [httpd] 172.17.0.1 - nut-unraid [02/Aug/2020:15:23:29 +0200] "POST /write?db=nut HTTP/1.1" 400 156 "-" "python-requests/2.23.0" 5b11b2e5-d4c3-11ea-acab-0242ac110003 9191
         ts=2020-08-02T13:23:40.173464Z lvl=info msg="Executing query" log_id=0ONPV9xl000 service=query query="CREATE DATABASE nut"
         [httpd] 172.17.0.1 - nut-unraid [02/Aug/2020:15:23:40 +0200] "POST /query?db=nut&q=CREATE+DATABASE+%22nut%22 HTTP/1.1" 200 58 "-" "python-requests/2.23.0" 6124c50e-d4c3-11ea-acaf-0242ac110003 408
         [httpd] 172.17.0.1 - nut-unraid [02/Aug/2020:15:23:40 +0200] "POST /write?db=nut HTTP/1.1" 400 156 "-" "python-requests/2.23.0" 612bbc82-d4c3-11ea-acb0-0242ac110003 7150
         [httpd] 172.17.0.1 - nut-unraid [02/Aug/2020:15:23:50 +0200] "POST /query?db=nut&q=CREATE+DATABASE+%22nut%22 HTTP/1.1" 200 58 "-" "python-requests/2.23.0" 67398c3a-d4c3-11ea-acb5-0242ac110003 364
         [httpd] 172.17.0.1 - nut-unraid [02/Aug/2020:15:23:50 +0200] "POST /write?db=nut HTTP/1.1" 400 156 "-" "python-requests/2.23.0" 67408967-d4c3-11ea-acb6-0242ac110003 8965
         [httpd] 172.17.0.1 - nut-unraid [02/Aug/2020:15:24:10 +0200] "POST /query?db=nut&q=CREATE+DATABASE+%22nut%22 HTTP/1.1" 200 58 "-" "python-requests/2.23.0" 736c2a5c-d4c3-11ea-acc3-0242ac110003 352
         [httpd] 172.17.0.1 - nut-unraid [02/Aug/2020:15:24:10 +0200] "POST /write

     and so on... For the fix see the next two postings.
  9. My ups.conf:

         [myups]
             driver = snmp-ups
             port = <myups-ip>
             snmp_version = v3
             secLevel = "authPriv"
             secName = "mysnmpuser"
             authProtocol = "MD5"
             privProtocol = "DES"
             authPassword = "myauthpw"
             privPassword = "mycryptpw"
             pollfreq = 15

     I have SNMPv3 working fine with MD5 and DES. Unfortunately, neither algorithm is considered sufficiently secure today, so I would like to replace MD5 with SHA and DES with AES. My UPS supports SHA and AES fine, which I can verify using snmpwalk. Unfortunately, when I use

         authProtocol = "SHA"
         privProtocol = "AES"

     NUT will not start and claims to not find a suitable device. Any ideas how to fix this?
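     The snmpwalk check mentioned above would look roughly like this (net-snmp syntax; credentials are the placeholders from the config):

         snmpwalk -v3 -l authPriv -u mysnmpuser -a SHA -A myauthpw -x AES -X mycryptpw <myups-ip>

     If that walk succeeds while NUT does not start, the problem is likely in how the driver passes the protocol names, not in the UPS.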
  10. OK, got it. Let's say I wanted a swap file of 128 GB, just in case... What is the recommended route to go: put swap on HDD or SSD? For that I would need to reformat a drive or the cache pool from BTRFS to XFS. If the best way is to take the cache SSD pool from BTRFS to XFS, what is the recommended strategy to get from a) to b)? Note: I have empty drives lingering in the array; it would be easy to reformat one from BTRFS to XFS, but I guess putting swap on the cache pool is better?
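     For reference, once an XFS drive is available, creating and enabling the swap file itself is straightforward (a sketch; the path is a placeholder):

         fallocate -l 128G /mnt/disk1/swapfile
         chmod 600 /mnt/disk1/swapfile
         mkswap /mnt/disk1/swapfile
         swapon /mnt/disk1/swapfile
         swapon --show    # verify it is active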
  11. I just realized that my unRAID machine does not have swap enabled. I did not find much information here in the forum. Does unRAID by default use a swap file, or do I need to enable that manually somewhere? Of course I can do it using this plugin; I am just wondering whether there is a standard mechanism for swap in the unRAID UI and I just didn't find it.

      I also found that swap files are not supported on BTRFS on Linux kernels earlier than version 5, which is a pity, because all my drives use BTRFS, including the SSD cache drives. I never had memory problems, but with recently more Docker containers, some of which use a few hundred MB of RAM, I seem to be reaching a limit: CPU and storage get very busy when memory fills up. I first thought this was caused by swap thrashing, and then saw that the machine does not use swap at all...

      What is the recommended route to go: put swap on HDD or SSD? For that I would need to reformat a drive or the cache pool from BTRFS to XFS. Am I overthinking this? Will unRAID select the best location and create a swap file when it finds a non-BTRFS-formatted drive? If the best way is to take the cache SSD pool from BTRFS to XFS, what is the recommended strategy to get from a) to b)? Note: I have empty drives lingering in the array; it would be easy to reformat one from BTRFS to XFS, but I guess putting swap on the cache pool is better?
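      (For completeness: on a kernel >= 5.0 a swap file on BTRFS does work, provided the file is created NOCOW and uncompressed; a sketch, with placeholder path and size:)

         truncate -s 0 /mnt/cache/swapfile
         chattr +C /mnt/cache/swapfile     # disable copy-on-write before allocating
         fallocate -l 16G /mnt/cache/swapfile
         chmod 600 /mnt/cache/swapfile
         mkswap /mnt/cache/swapfile
         swapon /mnt/cache/swapfile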
  12. Good to know about the blacklist properties; I did not find that information elsewhere. That could explain why there is no "Replace Key" button. So my only option is to use another USB drive I have, which may not be reliable, until I receive a new drive.
  13. Yes of course, have you read my posting? To be very clear: the "Replace Key" button is _not_ there, because the UI says the drive is blacklisted (as probably every drive used before would be). There is also no "Purchase Key" button on the registration tab. The UI clearly says one should contact support. So if someone knows how to get a trial, or how to transfer the license to a previously blacklisted drive, that would help.
  14. The FI (residual-current breaker) tripped, an electrical event. SanDisk sticks tend to die when that happens; lucky you that you never had the problem. It's the third time it has happened to me. How do you get a trial license? I put in a different stick, but there was no option for getting one, I guess because the system knows that I had a Pro license before. Just a guess. Of course I tried to get a trial to bridge the time being; it seems to be non-obvious if you used a Pro license before. And finally, why would I buy a new license for the same server each time a flash drive is exchanged due to failure, or because it was an intermediary solution? That does not seem to make much sense to me, but maybe it does to you. An intermediary solution is always needed if you order a replacement flash drive but need to run the server until it arrives.
  15. My boot flash drive died on me, so I tried to use an old drive I had used before. Since it was used before with a Pro key, it is now blacklisted, as the key was transferred to a different drive. This would not be a problem if there were a possibility to actively use a blacklisted drive for up to 1 week. That would be enough time for Limetech to respond and provide a new key for the blacklisted drive, or to unlist it.

      As it always is, these things happen just before a weekend. I have been waiting for a solution for 2 days with no way to start my array. That also means I cannot use my Docker containers and VMs, where my home automation is running. So clearly not a situation where a waiting time is appreciated at all.

      It is also not at all clear to me why there is no grace period for self-reviving a blacklisted drive, and no self-service interface (at least I was not able to find one, and I did look) that lets you invalidate a drive and transfer the license to any other drive, even an old blacklisted one. I do remember that there was a self-service interface and one could do that once a year; however, I have no idea where to find it. At least it is not linked from the unRAID web UI for a blacklisted flash drive! Is there a faster way than waiting for Limetech support (except, of course, paying for another license, since there is no refund option)?