arghhh40k

Members
  • Posts: 16
  • Joined
  • Last visited

  1. Thank you. If you have extra time, it's a low priority for me, since SNMP covers 90% of what I am looking for. I push the data to InfluxDB and Grafana, and my old dashboards from when I was using USB aren't quite working; now that the server and UPS are not in the same spot, it's not so easy to use USB. I have manually calculated the efficiency, and I'm ignoring VA in favor of Watts, since the VA reading covers only one phase of two while the Watts reading covers both (the manual calculation is sketched below). I also noticed the power factor is wonky, but these seem like first-world problems.
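     A minimal sketch of that manual calculation, assuming a NUT UPS reachable as eaton9px@localhost and that the driver exposes ups.realpower (output watts) and input.realpower (input watts); both names are placeholders and vary by driver and firmware:

         #!/bin/sh
         # Derive UPS efficiency from NUT variables read via upsc.
         UPS="eaton9px@localhost"                # hypothetical UPS name
         OUT_W=$(upsc "$UPS" ups.realpower)      # output watts (assumed variable)
         IN_W=$(upsc "$UPS" input.realpower)     # input watts (assumed variable)
         awk -v o="$OUT_W" -v i="$IN_W" \
             'BEGIN { if (i > 0) printf "efficiency: %.1f%%\n", 100 * o / i }'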
  2. I have been using SNMP but was trying to get more data, since the SNMP MIBs in the driver are extremely old. Hopefully the NUT team can update them with the 2023 version soon.
  3. Poking around the documentation, the Eaton 9PX should work with netxml-ups, but selecting the driver gives this error in the log:
     /usr/libexec/nut/netxml-ups: error while loading shared libraries: libneon.so.27: cannot open shared object file: No such file or directory
     Is this a config issue or a driver issue? I tried both the manual and UI modes.
     Manual:
     driver = netxml-ups
     port = http://192.168.x.x:80
     UI:
     other: netxml-ups
     port = http://192.168.x.x:80
     I tried both http and https, with ports 80 and 443.
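     For anyone hitting the same message: libneon is the HTTP/XML client library that netxml-ups links against, so this looks like a missing shared-library dependency rather than a ups.conf problem. A sketch of a quick check (the package that provides libneon.so.27 varies by distro):

         # List the driver's shared-library dependencies; anything marked
         # "not found" must be installed before the driver can start.
         ldd /usr/libexec/nut/netxml-ups | grep -E 'neon|not found'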
  4. It is missing VA and efficiency, along with a lot of the newer UPS statuses (such as high-efficiency mode) and a lot of new alarms. It looks like they are actively updating the MIBs with the release of the new M2 and M3 cards. Thank you for the support; I will log it on GitHub with the new MIBs. GitHub issue #2372. Edit: I just noticed that ups.power is incorrect, showing only one phase of the split phase rather than the total output of the UPS: 28% of 5520 is 1545 W, not 627 W. (Summing the two legs manually is sketched below.)
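     A sketch of summing both legs, assuming per-phase output variables named along the lines of output.L1.realpower and output.L2.realpower; these names are hypothetical, so check the full upsc listing for what this firmware actually reports:

         #!/bin/sh
         # Add the two legs of a split-phase UPS to get total output watts.
         UPS="eaton9px@localhost"                 # hypothetical UPS name
         L1=$(upsc "$UPS" output.L1.realpower)    # assumed variable name
         L2=$(upsc "$UPS" output.L2.realpower)    # assumed variable name
         awk -v a="$L1" -v b="$L2" 'BEGIN { printf "total output: %.0f W\n", a + b }'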
  5. Hi, I got my Eaton 9PX UPS working over SNMP v3, but it's using an old MIB. I have the new supported MIBs from Eaton. How do I get NUT to use them?
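     Worth noting: snmp-ups does not read vendor MIB files from disk; its OID-to-variable mappings are compiled into the driver, so coverage for genuinely new MIBs has to come from a NUT driver update. What ups.conf does let you do is pin or auto-detect which built-in mapping is used via the mibs option, roughly like this sketch (section name and address are placeholders):

         [eaton9px]
             driver = snmp-ups
             port = 192.168.x.x
             snmp_version = v3
             mibs = auto    # or a specific built-in mapping name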
  6. I have upgraded to 6.9 and changed my Docker setup from an image to a directory. I created a new cache-only share called Docker and recreated all the Docker containers, and everything is working fine. However, Mover is trying to move the files in the docker directory even though the share is cache-only, filling up my log with each file and "no medium found". What is going on? I only have a single cache pool, called cache. storage-diagnostics-20210306-1714.zip
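     One quick thing to verify from the console, as a sketch: the share's use-cache setting lives on the flash drive, and Mover should skip shares set to "only". The path and key name below follow the stock Unraid layout; adjust if yours differs:

         # Confirm the share is really recorded as cache-only.
         cat /boot/config/shares/Docker.cfg
         # Expect a line like: shareUseCache="only"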
  7. I didn't do the wipefs, but swapping cache 1 and 2 seems to have fixed it. I have since removed more drives and consolidated to the 14TB and 8TB only.
  8. The 14TB drive is new. I installed it but left it unassigned, as I was shrinking the array and didn't want it as cache. Once my copy is done I will swap parity and see what happens. My parity currently isn't synced.
  9. root@Storage:/boot/utt# blkid
     /dev/loop0: TYPE="squashfs"
     /dev/loop1: TYPE="squashfs"
     /dev/sda: LABEL="UNRAID" UUID="6E54-85DA" TYPE="vfat"
     /dev/sdc1: UUID="edd752b1-6589-44f7-9199-2a45e02e4a0d" UUID_SUB="3c51ff35-add3-45a0-8397-097047d495a7" TYPE="btrfs"
     /dev/sdd1: UUID="c86d8c19-46c1-4d76-9bda-46d21721f8bb" UUID_SUB="a3cb6dcd-c9f6-42a8-a5f7-30b35edd1ee7" TYPE="btrfs"
     /dev/sdk1: UUID="9e27c44a-246f-4651-aa55-11ee8b7bd7fd" UUID_SUB="a3111404-2110-4b26-ae60-214324711e5a" TYPE="btrfs" PARTUUID="143360e5-dc12-46e3-a8bf-076fd19353af"
     /dev/sdl1: UUID="5301a658-9d38-411a-8fdc-2030bf632b8b" UUID_SUB="0b73e0c3-6d28-4393-8816-3c609c514857" TYPE="btrfs" PARTUUID="57264887-16ad-4fac-939a-da6a4b60a135"
     /dev/sdm1: UUID="b0174dd1-02de-45cc-a62f-7156c45fd721" UUID_SUB="14db7944-3f3e-4b1a-9d45-002a7b07a2b4" TYPE="btrfs" PARTUUID="6a11795b-e79d-487a-9abf-6be94d691a98"
     /dev/sdq1: UUID="0b3060f9-e98b-447e-8f2e-fc0eec2c7005" TYPE="xfs" PARTUUID="6f612d57-8d49-46e9-bc99-2acf8bc1f107"
     /dev/sdn1: UUID="ae7c33cc-8e6d-445a-93ae-e0d0fa7ff272" UUID_SUB="be5247b8-2f8b-4fe3-ad59-82238602524b" TYPE="btrfs" PARTUUID="d81e346f-37f3-4619-8ff5-a856b8388119"
     /dev/sdp1: UUID="9a2d9272-75fe-4292-aaad-adfb68c1d7f1" UUID_SUB="8f9c82ef-b8d0-4d55-8348-2031923f25b6" TYPE="btrfs" PARTUUID="023a392b-e06b-40ad-98b7-338dd062fe65"
     /dev/sdo1: UUID="6cb7991e-ce0b-45e6-b407-00aa10f999dd" UUID_SUB="f321d54b-a852-40d2-bc7f-657fec436763" TYPE="btrfs" PARTUUID="5c9fcaf1-b799-4612-99ba-10c9b41f5b4f"
     /dev/sds1: UUID="f30f805f-a050-4a9f-b7f5-8a363e1fe1e9" UUID_SUB="d990338e-6962-468f-8897-be28e5a04fde" TYPE="btrfs" PARTUUID="2cd3e71b-6e41-406e-80ec-7a0a16bd2369"
     /dev/sdt1: UUID="d558f5c3-d93d-42be-ac23-747fb7341d64" UUID_SUB="2a3a8213-645c-4864-bc46-6d7354a37bce" TYPE="btrfs" PARTUUID="867df603-6b94-4625-8adb-925e45b3fd19"
     /dev/sdg1: UUID="c28cd4b4-a292-456f-8999-1e770738bc42" TYPE="xfs" PARTUUID="e38c72db-2aa9-4634-87e6-bff7d66e3692"
     /dev/sde1: PARTUUID="72df5ea5-6b2b-4c9b-a2e2-6bc2685c61d3"
     /dev/sdu1: UUID="9d773758-3c04-4abe-8c0a-50c995e088a3" TYPE="xfs" PARTUUID="4929371a-95e4-4edc-a6ed-d59f3c0d79bf"
     /dev/sdh1: UUID="48328b23-5619-473e-ae9f-1814f290d145" TYPE="xfs" PARTUUID="d0e7c5c8-7e36-43b8-8c0d-c761b35837f5"
     /dev/md1: UUID="c28cd4b4-a292-456f-8999-1e770738bc42" TYPE="xfs"
     /dev/md2: UUID="48328b23-5619-473e-ae9f-1814f290d145" TYPE="xfs"
     /dev/md3: UUID="6cb7991e-ce0b-45e6-b407-00aa10f999dd" UUID_SUB="f321d54b-a852-40d2-bc7f-657fec436763" TYPE="btrfs"
     /dev/md4: UUID="d558f5c3-d93d-42be-ac23-747fb7341d64" UUID_SUB="2a3a8213-645c-4864-bc46-6d7354a37bce" TYPE="btrfs"
     /dev/md5: UUID="f30f805f-a050-4a9f-b7f5-8a363e1fe1e9" UUID_SUB="d990338e-6962-468f-8897-be28e5a04fde" TYPE="btrfs"
     /dev/md6: UUID="0b3060f9-e98b-447e-8f2e-fc0eec2c7005" TYPE="xfs"
     /dev/md7: UUID="9d773758-3c04-4abe-8c0a-50c995e088a3" TYPE="xfs"
     /dev/md8: UUID="9e27c44a-246f-4651-aa55-11ee8b7bd7fd" UUID_SUB="a3111404-2110-4b26-ae60-214324711e5a" TYPE="btrfs"
     /dev/md9: UUID="b0174dd1-02de-45cc-a62f-7156c45fd721" UUID_SUB="14db7944-3f3e-4b1a-9d45-002a7b07a2b4" TYPE="btrfs"
     /dev/md10: UUID="ae7c33cc-8e6d-445a-93ae-e0d0fa7ff272" UUID_SUB="be5247b8-2f8b-4fe3-ad59-82238602524b" TYPE="btrfs"
     /dev/md11: UUID="9a2d9272-75fe-4292-aaad-adfb68c1d7f1" UUID_SUB="8f9c82ef-b8d0-4d55-8348-2031923f25b6" TYPE="btrfs"
     /dev/md12: UUID="5301a658-9d38-411a-8fdc-2030bf632b8b" UUID_SUB="0b73e0c3-6d28-4393-8816-3c609c514857" TYPE="btrfs"
     /dev/sdi1: UUID="edd752b1-6589-44f7-9199-2a45e02e4a0d" UUID_SUB="498bb681-73a2-4c70-b1e0-de62361e423c" TYPE="btrfs"
     /dev/sdj1: UUID="edd752b1-6589-44f7-9199-2a45e02e4a0d" UUID_SUB="b7ee10ae-c10a-4ca3-b92c-3486b1bbe1c5" TYPE="btrfs"
     /dev/loop2: UUID="55cbcfff-fa93-46e5-9d76-35b56d8bcf0c" UUID_SUB="b0e8f202-2efb-422c-90b7-2338e162a117" TYPE="btrfs"
     /dev/sdf1: PARTUUID="12af5b9b-67eb-4815-8097-6c73de1b5fe1"
  10. Got larger array disks and am currently moving data off the older, smaller disks onto the new larger ones. After a disk is empty I stop the array, remove the drive, and do a new config, keeping the cache settings. I have no parity at the moment. Twice now I have gotten the no-UUID issue. I can manually mount the cache, copy data off, and let it reformat (the manual mount is sketched below). What is the fix here? This is the first time I have seen this behavior.
      Sep 2 13:34:04 Storage kernel: BTRFS info (device md12): has skinny extents
      Sep 2 13:34:05 Storage emhttpd: shcmd (2603): btrfs filesystem resize max /mnt/disk12
      Sep 2 13:34:05 Storage root: Resize '/mnt/disk12' of 'max'
      Sep 2 13:34:05 Storage kernel: BTRFS info (device md12): new size for /dev/md12 is 4000786976768
      Sep 2 13:34:05 Storage emhttpd: shcmd (2604): mkdir -p /mnt/cache
      Sep 2 13:34:06 Storage emhttpd: /mnt/cache mount error: No pool uuid
      Sep 2 13:34:06 Storage emhttpd: shcmd (2605): umount /mnt/cache
      Sep 2 13:34:06 Storage root: umount: /mnt/cache: not mounted.
      Sep 2 13:34:06 Storage emhttpd: shcmd (2605): exit status: 32
      Sep 2 13:34:06 Storage emhttpd: shcmd (2606): rmdir /mnt/cache
      Sep 2 13:34:06 Storage emhttpd: shcmd (2607): sync
      Sep 2 13:34:06 Storage emhttpd: shcmd (2608): mkdir /mnt/user0
      Sep 2 13:34:06 Storage emhttpd: shcmd (2609): /usr/local/sbin/shfs /mnt/user0 -disks 8190 -o
      storage-diagnostics-20190902-2044.zip
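      The manual mount mentioned above, roughly, as a sketch; /dev/sdX1 stands in for the actual cache device, and mounting read-only first keeps the pool untouched:

          # Identify btrfs devices and their pool UUIDs, then mount one read-only.
          btrfs filesystem show
          mkdir -p /mnt/rescue
          mount -t btrfs -o ro /dev/sdX1 /mnt/rescue    # hypothetical device node
          # Copy the data somewhere safe before letting Unraid reformat the pool.
          rsync -a /mnt/rescue/ /mnt/disk1/cache-backup/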
  11. It's currently in production; I will have a chance in a few hours. The ECC memory errors, as far as I can tell, are coming from the patrol scrub and are being corrected.
  12. Looking through my logs and NerdTools, it seems I have a failing SSD cache disk and a bad stick of RAM in slot 11. I am not super familiar with the hardware side of Linux, so I was hoping someone with more experience could take a look. I plan on running a memtest during the next downtime; is there anything else to look at? (A couple of quick checks are sketched below.) This is a Dell R320 with 48GB of ECC DDR3. storage-diagnostics-20170228-0657.zip
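      Two non-invasive checks that can run before the memtest window, as a sketch; the EDAC counters are exposed on most kernels with ECC reporting enabled:

          # Per-memory-controller corrected (ce) and uncorrected (ue) ECC counts.
          grep -H . /sys/devices/system/edac/mc/mc*/ce_count \
                    /sys/devices/system/edac/mc/mc*/ue_count
          # Kernel messages about memory or EDAC errors.
          dmesg | grep -i -e edac -e 'hardware error'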
  13. I did a new config to remove some old empty array drives, and when reassigning them based on the screenshot, the SSD will go into cache slot 1 but the other drive won't go into any cache slot; it changes from unassigned to "no device". The BTRFS cache will not mount with only one drive. I have a Pro key, so it's not the number of slots. I was running without parity previously and just added a parity drive, so now I have 6 data and 1 parity. (A manual recovery path is sketched below.)
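      If the immediate goal is to get data off the one pool member that is visible, btrfs can usually mount a multi-device filesystem with a device missing via the degraded option. This is a sketch for manual recovery, not the normal Unraid flow; /dev/sdX1 is a placeholder:

          mkdir -p /mnt/rescue
          mount -t btrfs -o degraded,ro /dev/sdX1 /mnt/rescue   # hypothetical device node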