wayner

Members
  • Posts: 536

Everything posted by wayner

  1. No soup for me either. Can someone please get rid of this guy:
  2. FYI - I haven't run a full SMART test on this drive, but the errors have gone away. Perhaps that is because I have deleted almost all of the data on the drive. I have excluded the drive from my Shares, but should I keep the drive around for now, or should I just get rid of it since it can no longer be trusted?
  3. And delete them using an rm command from a bash prompt?
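     (For illustration only - a manual delete from a bash prompt would look roughly like the lines below. The backup destination path is a placeholder; it depends on where the CA Backup plugin was configured to write its dated folders.)
        # Placeholder paths - substitute the actual backup destination before running.
        ls /mnt/user/Backups/CA_Backup                # list the dated backup folders
        rm -rf /mnt/user/Backups/CA_Backup/2017-11-01 # remove one dated backup folder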
  4. I am a bit out of date on 6.4.1. One of my primary uses for unRAID is to act as a SageTV server, so I have to ensure that the DVB edition is updated and works properly with SageTV. How do I know what other plugins are out of date? I do update the plugins, but that wouldn't have helped in this instance. For all of my other plugins, other than unRAID DVB edition, the plugin version is from 2019, with the exception of Dynamix System Statistics from 2018.08.29a. When I try to update it, it wants me to be on version 6.7.0rc1. Out of curiosity - how would I ever know that this Backup plugin was deprecated? Shouldn't it say that in the release notes, as in "No longer supported, move to version 2 of this plugin"? If I try to delete the backups again, is it still going to hang my server? Am I stuck with those files until I get rid of the drives that the files are on?
  5. Just to clarify, here is the plugin that was installed: CA Backup / Restore Appdata ("Part of the CA family, CA Backup / Restore Appdata will either manually or on a schedule, automatically backup your docker appdata for easy restoring in case of a cache drive failure") - Andrew Zawadzki - 2017.10.28 - up-to-date
  6. So when I do that I see a ca.backup directory dated Nov 1 2017. Does that mean it is the old one? And what is the least risky way to bring my system back to life? Doing a reboot command in the ssh session?
  7. Thanks trurl - I will check that out when I get home tonight.
  8. Thanks, I will check and see but I may have to first do a power cycle on the server to get it back to life, or at least a reboot command at the CLI. I can ssh into the server but the web UI is unresponsive.
  9. Here is a post from joenitro on Aug 21, 2017 that seems to describe the exact same issue. Unfortunately there isn't much of a resolution suggested. There are also posts just prior to this with similar issues.
  10. Last week I tried to manually delete some old backups from my array, and although I am not 100% sure, this may have caused my unRAID system to become unresponsive. I also changed the options for this plugin to clean up old backups, and when the backup ran earlier this morning it appears that my system is again unresponsive. I got a message that the backup completed at 4:49 am. At 5:01 am I got a message that one of my VMs went down, and I can't access my array, nor can I access the web UI. (I am now at work until 6:30 pm EDT and don't have access to do anything until then.) I thought I remembered reading that at times deleting backups can cause problems with your system. Is that still the case? Any advice on how to fix this?
  11. I have been getting errors like this:
      Event: unRAID Disk 1 SMART health [197]
      Subject: Warning [HOYLAKE] - current pending sector is 31
      Description: WDC_WD20EARS-00MVWB0_WD-WMAZ20252369 (sdf)
      Importance: warning
      And this:
      Event: unRAID array errors
      Subject: Warning [HOYLAKE] - array has errors
      Description: Array has 1 disk with read errors
      Importance: warning
      Disk 1 - WDC_WD20EARS-00MVWB0_WD-WMAZ20252369 (sdf) (errors 58)
      And a daily status report like this:
      Event: unRAID Status
      Subject: Notice [HOYLAKE] - array health report [FAIL]
      Description: Array has 6 disks (including parity & cache)
      Importance: warning
      Parity - ST4000DM000-2AE166_WDH0ZAL0 (sde) - active 32 C [OK]
      Disk 1 - WDC_WD20EARS-00MVWB0_WD-WMAZ20252369 (sdf) - active 33 C (disk has read errors) [NOK]
      Disk 2 - ST4000DM000-1F2168_Z303256P (sdd) - active 34 C [OK]
      Disk 3 - WDC_WD30EFRX-68EUZN0_WD-WCC4NHZDPANY (sdg) - active 29 C [OK]
      Disk 4 - ST4000DM004-2CV104_ZFN15ZEA (sdb) - active 28 C [OK]
      Cache - ST240HM000_Z4N0013X (sdc) - active 32 C [OK]
      Wouldn't this indicate that disk 1 is bad? It is only a 2TB drive and I need more space so I am clearing it and moving the data to a 4TB drive that I recently installed.
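      (As a reference point, and assuming the drive is still /dev/sdf as in the report above, the pending-sector situation can be re-checked from the unRAID console with smartctl - a sketch of the usual commands, not specific advice for this drive:)
         smartctl -a /dev/sdf            # show SMART attributes, incl. 197 Current_Pending_Sector
         smartctl -t long /dev/sdf       # start an extended self-test (takes hours on a 2TB drive)
         smartctl -l selftest /dev/sdf   # review the self-test log once the test has finished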
  12. I have a bad hard drive that I am in the process of removing from my system. One of the parts of this process appears to be excluding the drive from shares while I move my files to other drives. When I exclude a drive from a share does that mean that I can no longer see files on that drive that are in the share? Or does it just mean that new files written to the share will not go to that drive? Or does it mean something else?
  13. Can you run Jupyter Notebook with this docker from within PyCharm? I tried following the directions here: https://www.jetbrains.com/help/pycharm/using-ipython-notebook-with-product.html But that didn't seem to work, as Jupyter by default only listens on localhost and I don't know how you change that in a docker. It might be nice to have a docker that just runs Jupyter Notebook (FKA IPython).
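     (A minimal sketch, assuming a generic Jupyter image - the image name below is a placeholder. The usual fix for the localhost-only issue is to bind Jupyter to 0.0.0.0 inside the container and publish the port, so that PyCharm or a browser elsewhere on the LAN can reach it.)
        # 'some/jupyter-image' is a placeholder - substitute a real Jupyter image.
        docker run -d --name jupyter -p 8888:8888 some/jupyter-image \
            jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser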
  14. Thanks - the log was there in the /boot/logs/<servername>-diagnostics-20180504-1739.zip file and I can see that the power failed at 17:18 and that shutdown was initiated at 17:34. I will change my BIOS so that the system automatically boots up when AC power is restored. I guess the only instance where that won't help is if the system shuts down and then AC power comes back before the UPS is exhausted so that the server's power supply never really loses AC power. It would be nice if a UPS had a way to send a signal to power up upon restore of AC power.
  15. By logs folder do you mean /var/log? That doesn't appear to have older logs. Thanks for the tip on Tips and Tweaks - I have installed that. Where do you access the archived logs? I don't see anything in the plugin to access them, and I don't see a mention of this capability in the Tips and Tweaks wiki. My system did not do a parity check, so I guess I had a clean shutdown. So now I will have to do a shutdown and go and change the BIOS setting so that it powers up when power is restored.
  16. Are there any plugins that would write the log to a hard drive during system shutdown? Surely I am not the only one who would want to see logs from prior sessions - wouldn't this be very useful for documenting problems with one's system? My unRAID server runs SageTV media server, which feeds SageTV extenders at every TV, so the family may not be able to watch TV until I am home and able to check things out. They might have to, horror of horrors, watch Live TV. I am not sure my kids even understand what Live TV is.
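     (A manual sketch of the idea rather than a plugin: unRAID keeps its syslog in RAM, so copying it to the flash drive - for example from a stop script - preserves it across reboots. The target folder name is just an example.)
        # Copy the in-RAM syslog to the flash drive so it survives a reboot.
        mkdir -p /boot/logs
        cp /var/log/syslog /boot/logs/syslog-$(date +%Y%m%d-%H%M%S).txt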
  17. My unRAID system is plugged into a UPS and has the UPS driver installed and working. I had about a 3 hour power outage today, as we had a wicked wind storm here in Toronto. I think my UPS caused the system to shut down nicely, but how do I access the logs to confirm this, as my current log only shows history back to the power-up? My system did not automatically power back on. Do I need to go into the BIOS/UEFI and change the option so that the system turns on when AC power is restored? I am not sure, but I am guessing that it is set to "revert to last state".
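     (Assuming the built-in UPS support, which is based on apcupsd, the current UPS state and the reason for the last transfer to battery can be queried from the console as below; past shutdown events live only in the in-RAM syslog unless saved as sketched above.)
        apcaccess status    # reports battery charge, time left, and the last transfer reason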
  18. I think I am going to need more idiot-proof instructions. I have only ever built a VM using the unRAID GUI. Where do I even enter the command(s) shown in post 1? In an ssh session in my unRAID server? What directory do you put the IMG file in?
  19. Ok thanks Bob. I will give that a try when I have a moment.
  20. I am trying to create a Chrome OS VM - where did you get the ISO for Chrome OS?
  21. Where did you get the ISO image?
  22. The ubuntu-xrdp docker - yes it works now. But I don't know what I did to make it work.
  23. Well, you can use a hosts file to map IP addresses, but you can't use it to map ports. And I find it easier to keep track of what IP addresses are currently in use, as I use Fing to manage my LAN. It can get a little tricky in unRAID to keep track of what ports have been used, as the "Show used ports" option (or whatever it is called) on the docker creation screen does not necessarily show all of the ports in use.