andyd

Everything posted by andyd

  1. I ended up disabling Docker to delete the datasets - not an obvious thing to know but I'll know going forward
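For reference, a minimal sketch of that sequence on the Unraid console, using the cache/appdata/mariadb-official dataset from the original error as the example (the rc.docker path is an assumption about the Unraid version; `zfs destroy` is irreversible):

```
# Stop the Docker service so it releases its mounts inside the dataset
# (equivalently, in the GUI: Settings > Docker > Enable Docker: No)
/etc/rc.d/rc.docker stop

# List what is there, then remove the orphaned dataset
zfs list -r cache/appdata
zfs destroy -r cache/appdata/mariadb-official   # -r also removes child datasets/snapshots
```

With Docker stopped, nothing holds the mountpoint busy and the destroy goes through; re-enable Docker afterwards.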
  2. That did address it ... thanks! I can't communicate with the server, but I think this has to do with OPNsense, where the NUT server lives. I'll have to check on that
  3. Hi, I'm trying to set this up for the first time - setting this up as a slave...
     1. There is a small red hand near the monitoring password - why is that the case?
     2. If I try to enable the service I see the log below. Not sure what to make of any of the lines:
        Apr 19 12:26:24 HomeServer rc.nut: Writing NUT configuration...
        Apr 19 12:26:27 HomeServer rc.nut: Updating permissions for NUT...
        Apr 19 12:26:27 HomeServer rc.nut: Checking if the NUT Runtime Statistics Module should be enabled...
        Apr 19 12:26:27 HomeServer rc.nut: Disabling the NUT Runtime Statistics Module...
        Apr 19 12:26:28 HomeServer rc.nut: fopen /var/run/nut/upsmon.pid: No such file or directory
        Apr 19 12:26:28 HomeServer rc.nut: Could not find PID file to see if previous upsmon instance is already running!
        Apr 19 12:26:28 HomeServer rc.nut: Unable to use old-style MONITOR line without a username
        Apr 19 12:26:28 HomeServer rc.nut: Convert it and add a username to upsd.users - see the documentation
        Apr 19 12:26:28 HomeServer rc.nut: Fatal error: unusable configuration
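The fatal error at the end of that log points at the MONITOR line: current NUT versions require a username and a matching entry in upsd.users. A minimal sketch of what that looks like, where `monuser`, `monpass`, and `ups@192.168.1.1` are placeholder credentials and UPS address, not values from this setup:

```
# upsd.users (on the NUT master, e.g. the OPNsense box)
[monuser]
        password = monpass
        upsmon slave

# upsmon.conf (on the slave)
# MONITOR <ups>@<host> <powervalue> <username> <password> <master|slave>
MONITOR ups@192.168.1.1 1 monuser monpass slave
```

On Unraid's NUT plugin these fields are normally filled in through the settings page rather than edited by hand, so an empty or mismatched monitoring username/password there would produce the same "old-style MONITOR line" failure.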
  4. Hi, I am trying to delete datasets for Apps that no longer exist. I get the following error: "cache/appdata/mariadb-official: Operation not permitted". Is there a way to get around this error?
  5. Not sure if I would consider this an unclean shutdown, but can someone correct me if I'm wrong? Do I have to stop the array first before shutting down in the GUI? Also, would a running VM that doesn't shut down before the server is shut down via the GUI cause this?
  6. I haven't had any issues since I posted this, which is the longest it's gone. I didn't swap out anything. What I did do is disable the app backup, which I had forgotten about. Not 100% sure that was the issue, but after disabling it so it's not running at the same time as the ZFS backup, it seems ok now 🤷‍♂️
  7. I ended up pausing the rebuild (knowing it would restart) and shut down. After that, it started up successfully
  8. Ah, the Docker containers did start up but the VM refuses to. homeserver-diagnostics-20231229-1432.zip
  9. Or maybe I should ask why these services would fail to start? Since it's in the middle of a data rebuild, I can't turn the server off and back on.
  10. I went through this process... https://docs.unraid.net/legacy/FAQ/parity-swap-procedure/
      1. Did not realize it said `legacy`
      2. Seems overkill
      With that said, after copying over parity it's now rebuilding onto the demoted parity drive. I've done data rebuilds before and the array was still usable, along with Docker / VMs. Why would it not be in this case?
  11. I ran the extended test and it completed without error. Attached is the SMART report: homeserver-smart-20231211-1820.zip
  12. As far as I can tell there are 0 issues with this drive. No SMART errors before or after converting it to ZFS for the sole purpose of backing up app data. Since the ZFS formatting, the drive has been disabled two times with successful rebuilds, and it's now on its third. Before the ZFS formatting, this drive had been running as a data drive for several years with no issues. Any reason why it would be an issue now? The drive as it stands only handles the ZFS backup. No data is being written to it otherwise
  13. Ok, thanks - I did the steps without going into maintenance mode and the drive is fine again
  14. All drives are on a SAS card. I would expect other drives to have issues as well if it was the card. Does it make sense to power the server off and on to see if it corrects itself?
  15. I recently set up a drive as ZFS for ZFS snapshots. That was working fine until this morning. I am confused by the messaging, because the drive seems to be ok but Unraid doesn't think it is. How do I handle this? I ran a SMART check and there were no errors
  16. Ah sorry, at this point I got beyond that issue. For some reason, reformatting the drive to ZFS again made it show up in the plugin. I ended up reformatting because I was getting weird errors that there was insufficient space. I'm all good now
  17. Running into the same issue, RealActorRob. Not sure if it's safe to shut down? In my case, it was because I had a window open to the web terminal
  18. Could it be how I moved over the data? This drive has been sitting around for a while. Yesterday I decided to set up ZFS snapshotting. I moved a folder I had on another drive (nextcloud) to this drive using unbalance. Not sure if that is the reason? Is there some way data should be moved to the drive? Do I have to manually create datasets for it to appear in ZFS Master?
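ZFS Master lists datasets, not plain directories, and unbalance copies with rsync, which only creates an ordinary folder on the destination. One way to convert such a folder into a dataset, sketched with a placeholder drive name (`disk3`) and the nextcloud folder mentioned above as the example:

```
# Set the copied folder aside, create a real dataset in its place,
# then move the data into the dataset and clean up
mv /mnt/disk3/nextcloud /mnt/disk3/nextcloud_tmp
zfs create disk3/nextcloud
rsync -a /mnt/disk3/nextcloud_tmp/ /mnt/disk3/nextcloud/
rm -r /mnt/disk3/nextcloud_tmp
```

After this, the folder mounts at the same path as before, but as a dataset that ZFS Master can see and snapshot.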
  19. Looks like it's there - "apps" - but it's not showing up under the plugin
  20. Thanks for the plugin! I set this up today - it picks up one of the pool drives, but I have two formatted with ZFS. Any reason the one would be ignored?
  21. Ok, got it. I used the remote IP option and immediately see logs now. Thanks!
  22. Ok, I enabled the last option, but I'm a little confused by the help box that appears when clicking it - what's the point of the top part if I need the bottom as well? Is it that whatever is recorded on the flash drive will be moved to the local syslog server periodically?
  23. I have syslog set up as shown below. If I check the share, there is nothing in it. I have this share on an SSD - that SSD has a logs folder, also nothing in it. Is there an issue with the configuration?