luzfcb

Members · 10 posts
  1. @Rysz Is it possible to generate a release with debug symbols, using https://github.com/networkupstools/nut/pull/2310/files ? I'm currently getting a "Segmentation fault" when starting the driver for my UPS. It uses the SEC protocol via the gamatronic driver. The manufacturer does not officially support NUT, but told me I should configure the serial connection as:
     - Baud rate: 2400
     - Data bits: 8
     - Stop bits: 1
     - Parity: None
     - Minimum request interval: 600 ms

     This is my manual attempt to debug the error:

         root@f:/tmp/aaa# /usr/sbin/upsdrvctl -FF -DDDDD -u root start
         Network UPS Tools - UPS driver controller 2.8.2
         0.000000 [D2] If you're not a NUT core developer, chances are that you're told to enable debugging to see why a driver isn't working for you. We're sorry for the confusion, but this is the 'upsdrvctl' wrapper, not the driver you're interested in. Below you'll find one or more lines starting with 'exec:' followed by an absolute path to the driver binary and some command line option. This is what the driver starts and you need to copy and paste that line and append the debug flags to that line (less the 'exec:' prefix). Alternately, provide an additional '-d' (lower-case) parameter to 'upsdrvctl' to pass its current debug level to the launched driver, and '-B' keeps it backgrounded.
         0.000041 [D1] upsdrvctl commanding all drivers (1 found): (null)
         0.000045 [D1] Starting UPS: nhs
         0.000052 [D2] 1 remaining attempts
         0.000056 [D2] exec: /usr/libexec/nut/gamatronic -FF -a nhs -u root
         0.000059 [D1] Starting the only driver with explicitly requested foregrounding mode, not forking
         Network UPS Tools - Gamatronic UPS driver 0.05 (2.8.2)
         Connected to UPS on /dev/ttyACM0 baudrate: 2400
         UPS: NHS Sistemas de Energia PDV Senoidal 1500 VA
         Segmentation fault

     I'm wondering if a build with debug symbols included might make it easier to discover the cause of the error, so that I can gather at least minimal information and open an issue in NUT.
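     The upsdrvctl banner above asks you to copy the `exec:` line and append debug flags yourself. A minimal sketch of that step, using the log line from the output above (running the resulting command, and the gdb invocation in the comments, of course requires the driver binary and the attached UPS):

```shell
# Pull the driver command out of the upsdrvctl debug output and
# append extra debug flags, as the wrapper's message suggests.
log='0.000056 [D2] exec: /usr/libexec/nut/gamatronic -FF -a nhs -u root'
cmd=$(printf '%s\n' "$log" | sed -n 's/.*exec: //p')
echo "$cmd -DDDDD"
# On the real system (as root), a backtrace could then be captured with:
#   gdb --args /usr/libexec/nut/gamatronic -FF -a nhs -u root -DDDDD
#   (gdb) run
#   (gdb) bt
```

     Even without a debug-symbol build, a gdb backtrace of the stripped binary can at least show which library frame crashed.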
  2. NUT 2.8.2 was released almost 1 hour ago: https://github.com/networkupstools/nut/releases/tag/v2.8.2 https://github.com/networkupstools/nut/blob/master/NEWS.adoc#release-notes-for-nut-282---whats-new-since-281 https://github.com/networkupstools/nut/blob/master/UPGRADING.adoc#changes-from-281-to-282 I'm looking forward to 2.8.2 being available in the Unraid NUT plugin, because maybe the gamatronic driver will then work without raising a weird segmentation fault.
  3. Hello, on my Unraid 6.12.8 I have two 1 TB NVMe SSDs from the same manufacturer, acting as cache using BTRFS. I recently noticed in the logs that apparently one of the SSDs might be dying:

         Mar 13 18:35:13 f kernel: btrfs_end_super_write: 282 callbacks suppressed
         Mar 13 18:35:13 f kernel: BTRFS warning (device nvme0n1p1): lost page write due to IO error on /dev/nvme0n1p1 (-5)
         Mar 13 18:35:13 f kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 326893147, rd 40441477, flush 23342600, corrupt 3, gen 0
         Mar 13 18:35:13 f kernel: BTRFS error (device nvme0n1p1): error writing primary super block to device 1
         Mar 13 18:35:13 f kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 326893148, rd 40441477, flush 23342600, corrupt 3, gen 0
         Mar 13 18:35:13 f kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 326893149, rd 40441477, flush 23342600, corrupt 3, gen 0
         Mar 13 18:35:13 f kernel: BTRFS warning (device nvme0n1p1): lost page write due to IO error on /dev/nvme0n1p1 (-5)
         Mar 13 18:35:13 f kernel: BTRFS error (device nvme0n1p1): error writing primary super block to device 1

     When I click on the device name, go to the Self-Test tab and click Download, a .zip file is downloaded containing a TXT file with the result of the SMART analysis. It turns out that on only one of the SSDs does the TXT file contain the SMART test report; on the other SSD the TXT file only contains:

         smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.1.74-Unraid] (local build)
         Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
         Smartctl open device: /dev/nvme0 failed: No such device

     Questions:
     1 - Is this a bug?
     2 - Why is apparently only one of the SSDs being used? ( Cache 2 is used but the )
     3 - In the Self-Test tab of Cache 2, when clicking SMART error log, this error is displayed:

         Num  ErrCount  SQId  CmdId   Status  PELoc  LBA  NSID  VS  Message
         0    106       0     0x0001  0x4004  -      0    0     -   Invalid Field in Command

     Why is the error not counted in the "Pool Devices" -> "Errors" column?

     Some extra information:

         root@f:~# cat /etc/unraid-version
         version="6.12.8"
         root@f:~# mount | grep nvm
         /dev/nvme0n1p1 on /mnt/cache type btrfs (rw,noatime,ssd,discard=async,space_cache=v2,subvolid=5,subvol=/)
         root@f:~# ls /dev/ | grep nvm
         nvme1
         nvme1n1
         nvme1n1p1

     Cache 1: (screenshot attached) Cache 2: (screenshot attached)
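     Side note on the kernel lines above: the wr/rd/flush numbers are cumulative per-device error counters that BTRFS tracks for the lifetime of the filesystem; `btrfs device stats /mnt/cache` reports the same counters outside the kernel log, and `btrfs device stats -z /mnt/cache` (as root) zeroes them so that new errors stand out after a repair. A small sketch of pulling the write-error count out of one such line (the log line is copied from above):

```shell
# Extract the cumulative write-error counter ("wr N,") from a BTRFS kernel log line.
line='BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 326893149, rd 40441477, flush 23342600, corrupt 3, gen 0'
wr=$(printf '%s\n' "$line" | sed -n 's/.* wr \([0-9][0-9]*\),.*/\1/p')
echo "write errors so far: $wr"
```

     Over 300 million write errors on one device strongly suggests the kernel has dropped that device, which would also explain why /dev/nvme0 no longer exists for smartctl.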
  4. First of all, I'm new to Unraid and I don't know if this functionality exists or if there's a better way to do it.
     Problem: I have a somewhat complex setup. On the Unraid server I have Nextcloud AIO installed from the official Docker image, and I would like to make it accessible over the internet. The company that provides my internet connection does not give me any access to the router (the equipment that converts the fiber optic signal to an Ethernet cable), so I cannot open external ports, do port forwarding, etc. It also gives me a dynamic IP address, which tends to change frequently. One way around this is a VPN. I bought a domain name, rented a low-latency VPS for my region, and created a VPN network using Netmaker installed via Docker. On the VPS I have Caddy as a reverse proxy that forwards requests to the Netmaker client host, which effectively gives me access to the exposed ports of the Unraid server. All of this has been working fine; I have no problems accessing my Nextcloud server via my domain.
     Although the setup works, there is a problem: when I am at home, on the same network as the Unraid server, and the Nextcloud mobile app uploads something, the request is sent over the internet to my VPS and then back over the VPN to the Unraid server. Since I pay for data traffic on both the VPS and my internet connection, this is not a good thing. The solution for this was:
     1 - Create a new Caddy container with a fixed IP that reverse-proxies directly to the Nextcloud containers without going through the VPN. A cron task periodically logs into my VPS, copies the Caddy certificates from the VPS to the Caddy running on Unraid, and keeps the Unraid Caddy in sync.
     2 - Install Pi-hole and add a DNS record that resolves my domain directly to the IP of the Caddy running on Unraid.
     This is the configuration I have now for Caddy running on Unraid: Nextcloud AIO creates its own Docker network named nextcloud-aio. To give the Caddy on Unraid access to Nextcloud's internal network (the nextcloud-aio Docker network), it is necessary to manually execute:

         docker network connect nextcloud-aio caddy-local-proxy

     Connecting the network manually works fine; however, the configuration does not survive the way Unraid recreates containers. That is, if you edit the caddy-local-proxy container, for example, and apply, Unraid deletes the previous container and creates it again, but the previously connected extra networks are not connected again.
     The Feature Request: One way I thought of to solve this is an additional step performed automatically by Unraid when it starts and/or recreates containers: a new UI element letting the user pick, via a dropdown selector (something like the Network Type dropdown), which additional networks the container should connect to, and from that list run docker network connect. Currently I have a cron job that checks from time to time and ensures a given container has access to a given network, but if that functionality (or something better) doesn't exist, I think it would be a nice addition to Unraid.

         #!/bin/bash

         # Define the list of container-network pairs
         container_network_pairs=(
           "caddy-local-proxy nextcloud-aio"
           # Add more pairs as needed
         )

         # Wait until Docker daemon is available
         while ! docker info >/dev/null 2>&1; do
           echo "Waiting for Docker daemon to start..."
           sleep 1
         done

         # Function to check and connect container-network pairs
         check_and_connect_network() {
           local container="${1}"
           local network="${2}"

           # Wait until the container is running
           while ! docker container inspect -f '{{.State.Running}}' "${container}" >/dev/null 2>&1; do
             echo "Waiting for container '${container}' to start..."
             sleep 1
           done

           # Check if the container is connected to the network
           if docker container inspect "${container}" | jq -e --arg network "${network}" '.[0].NetworkSettings.Networks[$network]' >/dev/null 2>&1; then
             echo "Container '${container}' is already connected to the '${network}' network."
           else
             echo "Container '${container}' is not connected to the '${network}' network. Connecting..."
             if docker network connect "${network}" "${container}" >/dev/null 2>&1; then
               echo "Successfully connected '${container}' to the '${network}' network."
             else
               echo "Failed to connect '${container}' to the '${network}' network."
               exit 1
             fi
           fi
         }

         # Iterate over the container-network pairs and check/connect each pair
         for pair in "${container_network_pairs[@]}"; do
           container="$(echo "${pair}" | awk '{print $1}')"
           network="$(echo "${pair}" | awk '{print $2}')"
           check_and_connect_network "${container}" "${network}"
         done

         exit 0
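     One small design note on the loop at the end of the script: the two awk invocations per pair can be replaced with plain POSIX parameter expansion, which avoids spawning extra processes. A sketch, using the pair value from the script above:

```shell
# Split a "container network" pair without awk, using parameter expansion.
pair='caddy-local-proxy nextcloud-aio'
container=${pair%% *}   # everything before the first space
network=${pair#* }      # everything after the first space
echo "$container -> $network"
```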
  5. Question about Dynamix System Autofan: is there a way to prevent it from fully turning off a fan? I have a set of fans to cool the HDDs, and they also indirectly help cool other components. When the HDDs sleep, Autofan turns the fans off; I would like it to just slow them down instead of turning them off.
  6. Hello, I'm learning how VPN works with Unraid and I would like to know if there's any way to undo any configuration I've done in VPN Manager, in order to get it back to the default initial configuration. I couldn't find a delete button or anything similar; if such a button exists, I couldn't find it intuitively.
  7. Is there any chance that Unraid 6.12 stable release will include bash autocomplete config for docker?
  8. Hello, I'm trying to build my home server and I think the Docker integration is very good; however, I miss having Docker autocompletion properly installed and configured for when I need to do some things manually with Docker in the terminal. Unraid 6.12 RC5 includes Docker 20.10.23, and the bash completion file for that version is https://raw.githubusercontent.com/docker/cli/v20.10.23/contrib/completion/bash/docker
     Note: I expect the final Unraid 6.12 release to include at least the latest Docker version at this time, that is, Docker v23.0.6.
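     For anyone wanting this in the meantime, a minimal sketch of fetching the completion file matching the installed Docker CLI version (assumes curl is available and that your shell sources /etc/bash_completion.d; adjust the version string to whatever `docker version` reports):

```shell
# Build the URL of the docker bash-completion file for a given CLI version.
DOCKER_VERSION=20.10.23
url="https://raw.githubusercontent.com/docker/cli/v${DOCKER_VERSION}/contrib/completion/bash/docker"
echo "$url"
# Then, as root on the Unraid box (not executed here):
#   curl -fsSL "$url" -o /etc/bash_completion.d/docker
#   . /etc/bash_completion.d/docker
```

     Since Unraid's root filesystem lives in RAM, the download would need to be repeated at boot (e.g. from the go file) to persist across reboots.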
  9. Great. I asked the question because there is no information about Memtest86+ in the Release Notes for 6.12.0-rc1 and 6.12.0-rc2
  10. @limetech Will Unraid 6.12 also update Memtest86+ to the latest available version? The Memtest86+ (v5.x) version shipped with Unraid 6.11 is not capable of identifying the manufacturer of my RAM modules (Netac); the latest version of Memtest86+ (v6.10) already identifies it correctly. In other words, Memtest86+ 6.x does more than the 5.x series, per https://github.com/memtest86plus/memtest86plus#origins