Everything posted by hermy65

  1. @JorgeB You were correct as always, sir. Ran it again after moving that last 70GB and it looks like it completed. Diagnostics are attached. storage-diagnostics-20201222-0753.zip
  2. @JorgeB Interesting. I just manually checked and I'm seeing about 335GB used out of 1TB. After that I moved another 70GB off of it, so literally the only thing left on cache is my Plex folder and my docker image, which together total about 240GB. I'm running the command now; we'll see what happens this time.
  3. @JorgeB Having issues getting this to work; it keeps saying I'm out of space when there is almost 600GB available. storage-diagnostics-20201221-1351.zip
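An aside on the "out of space" error above: btrfs can report ENOSPC even when plenty of raw space is free, if all of it is already allocated to chunks. A minimal shell sketch of how to spot that condition; the `btrfs filesystem usage` output below is hypothetical, since the real check has to run against the actual pool:

```shell
# Hypothetical output of `btrfs filesystem usage /mnt/cache` -- real
# numbers will differ; this only illustrates full chunk allocation.
usage_output='Device size:              1.00TiB
Device allocated:         1.00TiB
Device unallocated:       0.00GiB
Used:                   335.00GiB'

# If "Device unallocated" is ~0 while "Used" is far below the device
# size, all space is allocated to chunks and a balance is usually the
# fix, e.g.:  btrfs balance start -dusage=75 /mnt/cache
unalloc=$(printf '%s\n' "$usage_output" | awk -F: '/unallocated/ {gsub(/ /,"",$2); print $2}')
echo "$unalloc"
```

If unallocated space really is near zero, a filtered balance compacts partially used chunks and returns their space to the unallocated pool.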
  4. @JorgeB Done. Attached are new diagnostics plus a screenshot of the popup I got. storage-diagnostics-20201221-1047.zip
  5. @JorgeB attached storage-diagnostics-20201221-1037.zip
  6. @trurl Attached: storage-diagnostics-20201221-0900.zip Also, I stopped and started the array before making the OP so I could get the exact message from the popup, but it didn't come up this time. Below is a picture of my cache pool; sdc is the new drive I put in that did not rebuild.
  7. Needed to upgrade 2 of the 4 SSDs in my cache pool, so I swapped in the first one and it did the standard BTRFS rebuild it is supposed to do. Once that completed I stopped the array and put the second one in. This time, though, it ran a parity check for some reason when it came up and never actually did the BTRFS rebuild, and now when I start the array it tells me I have a missing cache drive, but it references the new drive as the missing one even though it is in the array. So my question is: what do I do to get the BTRFS rebuild to do what it is supposed to do?
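One way to sanity-check whether a replacement SSD actually joined the pool is to compare the pool's "Total devices" count against the devid lines it lists. A sketch against hypothetical `btrfs filesystem show` output (the paths, sizes, and UUID are made up for illustration):

```shell
# Hypothetical `btrfs filesystem show /mnt/cache` output for a healthy
# 4-device pool -- substitute the output from your own system.
show_output='Label: none  uuid: 00000000-0000-0000-0000-000000000000
    Total devices 4 FS bytes used 335.00GiB
    devid    1 size 465.76GiB used 120.00GiB path /dev/sdb1
    devid    2 size 465.76GiB used 120.00GiB path /dev/sdc1
    devid    3 size 465.76GiB used 120.00GiB path /dev/sdd1
    devid    4 size 465.76GiB used 120.00GiB path /dev/sde1'

# If the count of devid lines is lower than "Total devices", the pool
# metadata still references a device that never joined -- which shows
# up in the GUI as a "missing" cache drive.
devids=$(printf '%s\n' "$show_output" | grep -c 'devid')
echo "$devids"
```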
  8. All of a sudden today my unraid box has been seriously sluggish, so I rebooted, and when it came back up it took 20+ minutes just to start my containers. My machine isn't underpowered, so it shouldn't be this slow: I'm running dual Xeon E5-2630 v4s and 64GB of RAM. Prior to today I had ~200 days of uptime without any issue, so this is definitely not normal for my rig. Diagnostics are attached. Edit: I'm also seeing slowdown now when accessing/modifying existing containers. storage-diagnostics-20201124-1525.zip
  9. Perhaps mine does not have Modbus support; I can't seem to find it in any menu.
  10. @GilbN https://www.apc.com/shop/us/en/products/APC-Smart-UPS-C-1500VA-LCD-120V-Not-for-sale-in-Vermont-/P-SMC1500
  11. @falconexe I have my server IP configured in the apcupsd section of the telegraf config, as shown:

      # Monitor APC UPSes connected to apcupsd
      [[inputs.apcupsd]]
        ## A list of running apcupsd servers to connect to.
        ## If not provided will default to tcp://127.0.0.1:3551
        servers = ["tcp://192.168.0.50:3551"]
        ## Timeout for dialing server.
        # timeout = "5s"

      Maybe @GilbN has a little insight, since he mentioned he is using the built-in APC UPS daemon as well.
  12. @GilbN @falconexe - OK, I moved my telegraf to host and it's pulling in *some* UPS data, but not all of it. I'm also noticing some strange things with the array growth and the TX/RX numbers on my eth0 interface. In #1, my UPS stats are not all populating; I'm using the APC UPS daemon that's built into unraid, if that matters. In #2, my annual array growth is less than my weekly/monthly. In #3, randomly I will get massive numbers on this panel; I assume it's just something weird on my end, but thought I would bring it to your attention. Also, random question: is there a way to convert the CPU temps to Fahrenheit?
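On the Fahrenheit question: the conversion is just F = C * 9/5 + 32, whether it is applied in a Grafana transform or anywhere else. A trivial shell sketch:

```shell
# Convert degrees Celsius to Fahrenheit (F = C * 9/5 + 32).
c_to_f() { awk -v c="$1" 'BEGIN { printf "%.1f", c * 9 / 5 + 32 }'; }

c_to_f 40    # a 40 degC CPU reads as 104.0 degF
```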
  13. Ah, that may be my issue. My telegraf does not run as host, so maybe that's why it's not working?
  14. @falconexe Finally getting this set up, and the main issue I'm running into so far is getting my UPS data to pull in. This is what my telegraf config looks like:

      # Monitor APC UPSes connected to apcupsd
      [[inputs.apcupsd]]
        ## A list of running apcupsd servers to connect to.
        ## If not provided will default to tcp://127.0.0.1:3551
        # servers = ["tcp://127.0.0.1:3551"]
        ## Timeout for dialing server.
        # timeout = "5s"

      I'm guessing I need to fill in the IP of my unraid server here, since I'm using the built-in APC UPS daemon under Settings -> UPS, but when I try that it says "no route to host". I'm guessing I need to configure something, but I'm not sure what. Perhaps I cannot use the built-in APC UPS daemon in unraid?
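For reference, the change that resolved this elsewhere in the thread was uncommenting `servers` and pointing it at the unraid box's IP instead of localhost, with telegraf running in host network mode so it can reach the daemon. The address below is that poster's example; substitute your own server's IP:

```toml
# Monitor APC UPSes connected to apcupsd
[[inputs.apcupsd]]
  ## Uncomment `servers` and point it at the box running apcupsd; the
  ## default tcp://127.0.0.1:3551 only works when telegraf shares the
  ## host's network namespace.
  servers = ["tcp://192.168.0.50:3551"]  # example IP -- use your server's
  ## Timeout for dialing server.
  timeout = "5s"
```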
  15. @falconexe Yeah man, I'm still here, just trying to take in what you have done so far. Probably going to start playing with it tomorrow and see what we can accomplish. Great work again!
  16. @falconexe Looking good! Are you planning on releasing your dashboards once you are satisfied with them?
  17. @falconexe Nice find - let us know if it works for you!
  18. @falconexe / @KoNeko Perhaps we can push this idea towards one of the plugin devs here and they can make something like it or integrate it into an existing plugin they offer?
  19. Not necessarily a support question, but not sure where else to put this. Is there a plugin/app of some sort that we can install in unRAID that will allow us to visualize our storage usage over time? Perhaps it would show how much additional storage we are using per month, etc. Per-month storage burn might be helpful in determining when you need to get additional drives. It could also show if there was a deletion of data or something of that sort, since your usage would drop.
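Until a plugin exists, the over-time data is easy to collect by hand: append one timestamped usage sample per day and graph the file later. A minimal sketch, assuming a cron-friendly script and placeholder paths (`/tmp/usage.csv` and the `/` mount point are just examples):

```shell
# Append a timestamped disk-usage sample to a CSV; run from cron
# (e.g. daily) and plot the file to see storage burn over time.
log_usage() {
  mount_point=${1:-/}          # filesystem to sample
  csv=${2:-/tmp/usage.csv}     # where to accumulate samples
  # Column 3 of POSIX `df -Pk` is used space in kilobytes.
  used_kb=$(df -Pk "$mount_point" | awk 'NR==2 { print $3 }')
  printf '%s,%s\n' "$(date +%Y-%m-%d)" "$used_kb" >> "$csv"
}

log_usage / /tmp/usage.csv
tail -n 1 /tmp/usage.csv
```

Diffing consecutive rows gives the per-day (or per-month) growth, and a sudden drop would show a large deletion.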
  20. @johnnie.black I was able to find a potential 2nd drive that was part of it but was unable to get either to work using some of the commands from your link. Any other thoughts by chance?

      root@RackServ:~# mount -o degraded,usebackuproot,ro /dev/sdn1 /x
      mount: /x: wrong fs type, bad option, bad superblock on /dev/sdn1, missing codepage or helper program, or other error.
      root@RackServ:~# mount -o ro,notreelog,nologreplay,degraded /dev/sdn1 /x
      mount: /x: wrong fs type, bad option, bad superblock on /dev/sdn1, missing codepage or helper program, or other error.
      root@RackServ:~# mount -o degraded,usebackuproot,ro /dev/sdo1 /x
      mount: /x: wrong fs type, bad option, bad superblock on /dev/sdo1, missing codepage or helper program, or other error.
      root@RackServ:~# mount -o ro,notreelog,nologreplay,degraded /dev/sdo1 /x
      mount: /x: wrong fs type, bad option, bad superblock on /dev/sdo1, missing codepage or helper program, or other error.
      root@RackServ:/mnt/disk4# btrfs restore -v /dev/sdn1 /mnt/disk4/mike
      warning, device 5 is missing
      warning, device 1 is missing
      bad tree block 2115388817408, bytenr mismatch, want=2115388817408, have=0
      ERROR: cannot read chunk root
      Could not open root, trying backup super
      warning, device 5 is missing
      warning, device 1 is missing
      bad tree block 2115388817408, bytenr mismatch, want=2115388817408, have=0
      ERROR: cannot read chunk root
      Could not open root, trying backup super
      warning, device 5 is missing
      warning, device 1 is missing
      bad tree block 2115388817408, bytenr mismatch, want=2115388817408, have=0
      ERROR: cannot read chunk root
      Could not open root, trying backup super
      root@RackServ:/mnt/disk4# btrfs restore -v /dev/sdo1 /mnt/disk4/mike
      warning, device 1 is missing
      bad tree block 1754592444416, bytenr mismatch, want=1754592444416, have=0
      Couldn't read tree root
      Could not open root, trying backup super
      warning, device 1 is missing
      bad tree block 1754592444416, bytenr mismatch, want=1754592444416, have=0
      Couldn't read tree root
      Could not open root, trying backup super
      warning, device 1 is missing
      bad tree block 1754592444416, bytenr mismatch, want=1754592444416, have=0
      Couldn't read tree root
      Could not open root, trying backup super
  21. I lost a few files that were on my cache drive and was hoping to recover them from some older drives that used to be my cache drives, but when I try to mount them in Unassigned Devices I get the following. Is there any possible way for me to recover the files I need, or am I out of luck?

      May 21 09:03:38 RackServ kernel: BTRFS info (device sdn1): disk space caching is enabled
      May 21 09:03:38 RackServ kernel: BTRFS info (device sdn1): has skinny extents
      May 21 09:03:38 RackServ kernel: BTRFS error (device sdn1): devid 5 uuid 9203c653-08ef-4d3a-8068-fb91544bafcb is missing
      May 21 09:03:38 RackServ kernel: BTRFS error (device sdn1): failed to read the system array: -2
      May 21 09:03:38 RackServ kernel: BTRFS error (device sdn1): open_ctree failed
      May 21 09:03:38 RackServ unassigned.devices: Mount of '/dev/sdn1' failed. Error message: mount: /mnt/disks/Samsung_SSD_850_EVO_500G: wrong fs type, bad option, bad superblock on /dev/sdn1, missing codepage or helper program, or other error.
      May 21 09:03:38 RackServ unassigned.devices: Partition 'Samsung SSD_850_EVO_500G' could not be mounted.
  22. @johnnie.black I guess I'm not following. Are you saying what I did earlier caused the issue?