dgriff

Members

  • Posts: 49
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed
  • Location: Canada


dgriff's Achievements

Rookie (2/14)

Reputation: 6

  1. Grandfathered licenses will continue to get all upgrades free for life. It’s been stated in the vlog.
  2. The FUD from Unraid competitors and fans of the competition is only just beginning. Look for a huge disinformation campaign.
  3. Reddit is a slum now. Don’t care about them at all anymore. I suggest you delete them as well. You’ll be happier.
  4. Anyone know what's going on with the changedetection.io docker? It's been showing as unavailable for a few days now. I see another option from Linuxserver; not sure if it's better to just switch over?
  5. Seems to be acting the same. I think it may be Firefox related (even with all addons disabled). I don't get the same message in Chrome or Safari.
  6. Even more info... When the array mounts, I get a prompt from my browser: "To display this page, Firefox must send information that will repeat any action (such as a search or order confirmation) that was performed earlier". If I click "Resend", the array fails to mount with a "stale configuration" error. If I click "Cancel", everything mounts properly. Maybe just an HTML bug?
  7. Was just going through some array maintenance and noticed that one of my drives (one of the oldest) was encrypted back in the LUKS1 days, and still has the old header on it. I'm assuming there's no way to upgrade the drive without moving everything off, reformatting, and putting it back (or just adding a new array drive with the more modern encryption and doing a single and a reformat of the old one). Are there any significant reasons to bother with this or worry about it, including security issues, speed, or reliability? I just used the backup headers option to back up the headers from all of my encrypted drives, and the 1MB header on the old drive vs. the 16MB for the new ones was the first thing that caught my eye and revealed the difference.
  8. Just a thought, if this helps track it down: at one point I had 18 array drives assigned, but there were a bunch of old, slow, small ones that were empty, which I later decided to take out through "new configuration". As a result, slots 13, 14, 15 and 18 showed up on the array as "unassigned" (which is proper), yet it was still showing "array of 20 devices" on the first reboot (stale configuration). After the next reboot, the "unassigned" slots were gone; the list of array drives shows 2 x parity, then 1-12, skips to 16 and 17, and correctly shows "Array of sixteen devices" at the bottom. When I did the new configuration, I re-assigned the drives using their original array numbers rather than collapsing them down to sixteen, which left unassigned gaps in the array layout (13, 14, 15, 18), but I figured that was still safe.
  9. Had this problem with RC3 and RC4, where the first reboot after update results in a "stale configuration" when restarted, so the array never starts (only offers reboot and shutdown buttons on the UI). Fortunately on second reboot it seems to work OK.... But why? Had this issue on RC3, but forgot to capture a diagnostic before I rebooted, so when it happened this time, I remembered, and here it is! diagnostics-20220319-1715.zip
  10. Great plugin. I ran the extended tests and it found a number of duplicate files on different array drives (probably from mistakenly cancelling an unbalance operation). It would be great if the unbalance plugin had a pop-up warning about duplicates left behind when you cancel, or an option to clean up the sources immediately after each move takes place rather than at the end, or left a script behind that you could run to clean up leftovers automatically. That said, is there any way to schedule an "extended test" to run automatically once a month, as opposed to the standard test on a weekly basis?
  11. Couldn't get InfluxDB v2.x to connect properly. Running the "influx" command through the docker command prompt never connected to the database, so I was unable to create databases for my applications to populate. Doing the same setup process with InfluxDB v1.7 worked fine.
  12. Hoping someone can point me in the direction of why unifi-poller isn't returning any data. Checked the logs for the unifi-poller docker and it seems to be working properly. Same with Prometheus. Grafana finds the Prometheus data source, but doesn't seem to be getting any data.

      Unifi-poller:
      2021/04/12 23:34:05.222977 updateweb.go:193: [INFO] => URL: https://192.168.1.9:8443 (verify SSL: false)
      2021/04/12 23:34:05.222991 updateweb.go:193: [INFO] => Version: 6.1.71 (a44c7aa3-f9c0-4d69-9eda-6ae6b9ab3ce6)
      2021/04/12 23:34:05.223005 updateweb.go:193: [INFO] => Username: unifipoller (has password: true)
      2021/04/12 23:34:05.223018 updateweb.go:193: [INFO] => Hash PII / Poll Sites: false / all
      2021/04/12 23:34:05.223041 updateweb.go:193: [INFO] => Save Sites / Save DPI: true / false (metrics)
      2021/04/12 23:34:05.223069 updateweb.go:193: [INFO] => Save Events / Save IDS: false / false (logs)
      2021/04/12 23:34:05.223086 updateweb.go:193: [INFO] => Save Alarms / Anomalies: false / false (logs)
      2021/04/12 23:34:05.223099 updateweb.go:193: [INFO] => Save Rogue APs: false
      2021/04/12 23:34:05.223239 logger.go:17: [INFO] InfluxDB config missing (or disabled), InfluxDB output disabled!
      2021/04/12 23:34:05.223281 logger.go:17: [INFO] Loki config missing (or disabled), Loki output disabled!
      2021/04/12 23:34:05.223378 logger.go:15: [INFO] Internal web server disabled!
      2021/04/12 23:34:05.226000 logger.go:17: [INFO] Prometheus exported at http://0.0.0.0:9031/ - namespace: unifipoller

      Prometheus:
      level=info ts=2021-04-14T06:40:48.723Z caller=main.go:423 build_context="(go=go1.16.2, user=root@a67cafebe6d0, date=20210331-11:56:23)"
      level=info ts=2021-04-14T06:40:48.723Z caller=main.go:424 host_details="(Linux 5.10.28-Unraid #1 SMP Wed Apr 7 08:23:18 PDT 2021 x86_64 f7ef30f14488 (none))"
      level=info ts=2021-04-14T06:40:48.723Z caller=main.go:425 fd_limits="(soft=40960, hard=40960)"
      level=info ts=2021-04-14T06:40:48.723Z caller=main.go:426 vm_limits="(soft=unlimited, hard=unlimited)"
      level=info ts=2021-04-14T06:40:48.732Z caller=web.go:540 component=web msg="Start listening for connections" address=0.0.0.0:9090
      level=info ts=2021-04-14T06:40:48.733Z caller=main.go:795 msg="Starting TSDB ..."
      level=info ts=2021-04-14T06:40:48.737Z caller=tls_config.go:191 component=web msg="TLS is disabled." http2=false
      level=info ts=2021-04-14T06:40:48.737Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1618293830375 maxt=1618315200000 ulid=01F362TFS260HS5A3H217T57VK
      level=info ts=2021-04-14T06:40:48.738Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1618315205375 maxt=1618336800000 ulid=01F36QDNGNPE73SFT2E75GSR57
      level=info ts=2021-04-14T06:40:48.739Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1618358405375 maxt=1618365600000 ulid=01F37553S58XXY9EA7T0PPH3XG
      level=info ts=2021-04-14T06:40:48.739Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1618365605375 maxt=1618372800000 ulid=01F37C0V14XS97P35V8V3RDRMW
      level=info ts=2021-04-14T06:40:48.740Z caller=repair.go:57 component=tsdb msg="Found healthy block" mint=1618336805375 maxt=1618358400000 ulid=01F37C0V77EETHRZVDG6CSDM1Z
      level=info ts=2021-04-14T06:40:48.779Z caller=head.go:696 component=tsdb msg="Replaying on-disk memory mappable chunks if any"
      level=info ts=2021-04-14T06:40:48.782Z caller=head.go:710 component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.115569ms
      level=info ts=2021-04-14T06:40:48.782Z caller=head.go:716 component=tsdb msg="Replaying WAL, this may take a while"
      level=info ts=2021-04-14T06:40:48.791Z caller=head.go:742 component=tsdb msg="WAL checkpoint loaded"
      level=info ts=2021-04-14T06:40:48.851Z caller=head.go:768 component=tsdb msg="WAL segment loaded" segment=8 maxSegment=12
      level=info ts=2021-04-14T06:40:48.898Z caller=head.go:768 component=tsdb msg="WAL segment loaded" segment=9 maxSegment=12
      level=info ts=2021-04-14T06:40:48.949Z caller=head.go:768 component=tsdb msg="WAL segment loaded" segment=10 maxSegment=12
      level=info ts=2021-04-14T06:40:48.983Z caller=head.go:768 component=tsdb msg="WAL segment loaded" segment=11 maxSegment=12
      level=info ts=2021-04-14T06:40:48.985Z caller=head.go:768 component=tsdb msg="WAL segment loaded" segment=12 maxSegment=12
      level=info ts=2021-04-14T06:40:48.985Z caller=head.go:773 component=tsdb msg="WAL replay completed" checkpoint_replay_duration=9.422595ms wal_replay_duration=194.019563ms total_replay_duration=206.643012ms
      level=info ts=2021-04-14T06:40:48.991Z caller=main.go:815 fs_type=65735546
      level=info ts=2021-04-14T06:40:48.991Z caller=main.go:818 msg="TSDB started"
      level=info ts=2021-04-14T06:40:48.991Z caller=main.go:944 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
      level=info ts=2021-04-14T06:40:48.993Z caller=main.go:975 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=2.101765ms remote_storage=5.596µs web_handler=1.603µs query_engine=2.343µs scrape=575.961µs scrape_sd=86.107µs notify=45.668µs notify_sd=66.982µs rules=3.986µs
      level=info ts=2021-04-14T06:40:48.993Z caller=main.go:767 msg="Server is ready to receive web requests."

      Any ideas? I have Grafana working properly with PostgreSQL for Teslamate.
  13. Well, not specifically Windows; any network user is the same. I can't do anything with my Mac over the network either, as remote connections don't have the permissions. How would I fix it?
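On post 7's LUKS1 question: recent cryptsetup releases can convert a LUKS1 header to LUKS2 in place, without moving data off the drive, provided the volume is closed (not mounted or unlocked). A minimal sketch, assuming the device path /dev/sdX1 as a placeholder and that your Unraid version accepts LUKS2 headers (worth verifying before converting anything):

```shell
# Check which LUKS version the on-disk header uses.
cryptsetup luksDump /dev/sdX1 | grep -m1 Version

# Back up the existing header first (same idea as the GUI backup option).
cryptsetup luksHeaderBackup /dev/sdX1 --header-backup-file /boot/sdX1-luks1-header.img

# Convert the header in place; requires the volume to be closed.
# Note: existing keyslots keep their LUKS1-era PBKDF2 key derivation
# until they are re-enrolled, so this is a header upgrade, not a rekey.
cryptsetup convert --type luks2 /dev/sdX1
```

These commands need root and a real block device, so treat them as an outline rather than something to paste blindly; the header backup makes the conversion reversible if Unraid rejects the result.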
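On post 12: the unifi-poller log shows its metrics exported at port 9031, while the Prometheus log shows it listening on 9090; healthy logs on both sides with an empty Grafana dashboard often just means Prometheus was never configured to scrape the exporter. A hypothetical scrape block (the job name, interval, and target IP are assumptions — point the target at wherever the unifi-poller container's port 9031 is reachable):

```yaml
# prometheus.yml (fragment)
scrape_configs:
  - job_name: unifipoller
    scrape_interval: 30s
    static_configs:
      - targets: ['192.168.1.9:9031']
```

After reloading Prometheus, the target should show as "up" on its /targets page; if it does, Grafana's queries against the unifipoller namespace should start returning data.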
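On post 10's duplicate files: the duplicates the extended test finds are files that exist at the same relative path on more than one array disk, so they can also be listed from the command line. A small sketch (the function name and the idea of passing /mnt/disk* mounts are mine, not part of any plugin):

```shell
# find_dupes DIR...  — print relative paths that appear under more than
# one of the given directories, e.g.: find_dupes /mnt/disk1 /mnt/disk2
find_dupes() {
  for d in "$@"; do
    [ -d "$d" ] || continue
    # List every file relative to the disk root, one per line.
    (cd "$d" && find . -type f)
  done | sort | uniq -d   # sorted, so duplicate paths are adjacent
}
```

Dropped into a script, something like this could be run on a monthly cron (the User Scripts plugin supports custom schedules) as a lightweight complement to the weekly standard test.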
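On post 13's permission problem: Unraid's own "New Permissions" tool resets shares to its standard scheme (owner nobody:users, directories 0777, files 0666) so network users regain access. A rough shell equivalent, assuming that scheme applies to the share in question (the function name is mine; the built-in tool is the safer choice on a live array):

```shell
# unraid_style_perms DIR — apply Unraid-style share permissions:
# owner nobody:users, directories 0777, files 0666.
unraid_style_perms() {
  dir="$1"
  # chown needs root; skip it otherwise so the chmods still apply.
  if [ "$(id -u)" -eq 0 ]; then
    chown -R nobody:users "$dir" 2>/dev/null || true
  fi
  find "$dir" -type d -exec chmod 0777 {} +
  find "$dir" -type f -exec chmod 0666 {} +
}
```

Run against /mnt/user/<share> this restores the permissive defaults that SMB access from Windows or a Mac expects.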