JohanSF

Members · 122 posts
Everything posted by JohanSF

  1. Press Edit and find "Show advanced"; then you will see the field.
  2. Use this repo until it is fixed: Repository: binhex/arch-rtorrentvpn:0.9.7-1-11 (it is the version from before it broke). Like this: Then, when it is fixed, you can change it back to: Repository: binhex/arch-rtorrentvpn:latest
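The same tag flip, as a tiny shell sketch (the helper name `rtorrent_tag` is made up for illustration; the two image tags are the ones from this post — in the Unraid template you simply edit the Repository field by hand):

```shell
# Illustrative only: pick the image tag depending on whether the
# broken release has been fixed upstream. Tags are from this post.
rtorrent_tag() {
  if [ "$1" = "fixed" ]; then
    echo "binhex/arch-rtorrentvpn:latest"
  else
    echo "binhex/arch-rtorrentvpn:0.9.7-1-11"   # last known-good build
  fi
}

rtorrent_tag broken
```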
  3. I just saw there's an update and updated; now my binhex rutorrent container won't start.
  4. Turns out that it works now that the parity check has ended!
  5. Will do that and report back when the parity check is done tomorrow.
  6. That is good news. Do you have any idea what to do about them? Do I have to restart?
  7. Can I restart the plugin? I don't want to restart the whole server now that the parity check is running.
  8. Got home to this log: Restarted the machine with the hardware button. Here are the diagnostics from before starting the array: hal9000-diagnostics-20181109-1539.zip It started, the parity check runs, and the dockers started too. I am looking at this now: Should I start Troubleshooting Mode in "Fix Common Problems"? Edit: Not sure I can do that though; the "Scanning" indicator when I enter the page seems to stay there forever. This is in the log:
  9. I can click Download diagnostics, but it keeps collecting diagnostics information forever and the download never happens. Trying the terminal method, I get "Starting diagnostics collection..." and then nothing happens. Update: It seems I cannot restart it remotely; I will have to do a hard reset when I get home. I really hope the cache drive is not corrupted.
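For reference, a hedged sketch of the terminal method (Unraid ships a `diagnostics` command that writes a dated zip and prints where it saved it; the `/var/log` check is a generic guess at why collection can hang, since Unraid keeps logs on a RAM-backed tmpfs that can fill up):

```shell
# If collection hangs, a full /var/log is one common cause worth
# checking first (on Unraid it is a small tmpfs):
df -h /var/log

# Then the bundle can be collected from an SSH/console session:
# diagnostics   # prints the path of the zip it writes
```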
  10. Ok. It is unresponsive in the sense that on the Main page, everything below the disk status boxes is now missing. I am using my phone with TeamViewer to see it. I can also see that the log has red errors. I can post that when I get home.
  11. I just updated to 6.6.5 and started the array. Next to the array status on the Main page it now says "BTRFS operation is running". Now it is unresponsive... should I hard-restart the machine? This is becoming a little scary.
  12. I celebrated too early. The whole unRaid server crashed again during the night. It must have happened before 3:40 am, as the mover had not run yet. Here are the syslog and diagnostics: syslog.txt and hal9000-diagnostics-20181109-0622.zip (I know that I used the server to watch something on Plex up until about 11 pm). It should also not be caused by my Ryzen 1700 processor, as I have the zenstates script applied to disable C6 states:
  13. I cannot thank you enough, it is good to have a stable system again.
  14. I got this: I stopped the parity check and tried this: All good now? If yes, that was an easy fix; can you explain more about what is going on and how you diagnosed it? New diagnostics: hal9000-diagnostics-20181108-1833.zip
  15. Alright, I don't really know what I am doing, but you are asking me to run `btrfs balance start -dusage=75 /mnt/cache` in the console, right?
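For anyone else reading: an annotated sketch of what that command does (it assumes a btrfs cache pool mounted at /mnt/cache, as in this thread; the commands need root, so they are guarded here and will only echo on a machine without the pool):

```shell
CACHE=/mnt/cache

# btrfs allocates space in fixed-size chunks, so a pool can report
# "no space" while df still shows free bytes: all chunks are
# allocated, but many are only partially used. The balance below
# rewrites data chunks that are at most 75% full, compacting them
# and returning the emptied chunks to the free pool.
if command -v btrfs >/dev/null && mountpoint -q "$CACHE"; then
  btrfs filesystem df "$CACHE"             # chunk-level view of allocation
  btrfs balance start -dusage=75 "$CACHE"  # rewrite data chunks <=75% used
  btrfs balance status "$CACHE"            # progress, if still running
else
  echo "btrfs pool not present here; commands shown for reference"
fi
```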
  16. This is a continuation of: and with diagnostics as per Squid's instructions; however, I did have to reboot in order to start the docker service again. hal9000-diagnostics-20181108-1715.zip In response to: I do have my appdata on the cache drive. I also think it did move last night; the 260 GB here makes sense, as I downloaded large content just after the first crash. But it does indeed seem to have something to do with the cache drive and/or a container.
  17. This is a continuation of: I just arrived home, and it turns out the server was running but the docker service had crashed: I shot this picture of the syslog: Squid, I am running that command now so I can present a full syslog next time. Might this be a docker container running amok?
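The idea behind that command is to keep a copy of the syslog somewhere that survives a hard crash, since /var/log on Unraid lives in RAM. A sketch (the flash-drive target path is an assumption; the demo below uses a throwaway temp file instead of the real syslog):

```shell
# On the server the idea is roughly:
#   tail -f /var/log/syslog > /boot/syslog-live.txt &
# so the log survives a hard reset. Demo with a temp file:
LOG=$(mktemp)
echo "Nov  9 06:22:01 hal9000 kernel: example entry" >> "$LOG"
tail -n 1 "$LOG"
rm -f "$LOG"
```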
  18. Installed last night and woke up to a crashed server with this on the screen. I had `tail -f /var/log/syslog` going. I also screenshotted this in Netdata earlier today; I don't know if it means anything. Fast-forward some hours to now: I am at work and can see on my phone that it has crashed again :( What should I do when I get home?
  19. Who are we waiting for here? Nvidia? Or is it Lime-Technology? Or Plex?
  20. I found out what broke. I use this Extra Parameter to map the calibredb file so that LazyLibrarian can connect to it, and found that it works with EDGE=0 but not with EDGE=1: `-v '/mnt/cache/appdata/Automation-LazyLibrarianOpt/calibredb':'/opt/calibre/calibredb':'rw'`
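To make the mapping explicit: that value is docker's standard `-v host:container:mode` bind-mount syntax. A minimal sketch splitting the three fields (the parsing is purely illustrative; the path is the one from this post):

```shell
# The Extra Parameters value from the post (quotes removed):
MAP="/mnt/cache/appdata/Automation-LazyLibrarianOpt/calibredb:/opt/calibre/calibredb:rw"

HOST_PATH=${MAP%%:*}   # path on the Unraid cache
MODE=${MAP##*:}        # rw = read-write inside the container
REST=${MAP#*:}
CONT_PATH=${REST%:*}   # where the container sees the file

echo "$HOST_PATH -> $CONT_PATH ($MODE)"
```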
  21. I had a fully working RDP-Calibre going and wanted to update, so I changed to EDGE=1; then I got this: Setting it back to EDGE=0 gives me a black screen with the mouse cursor, and I can do Ctrl+Shift+Alt but nothing else. Strangely, if I visit the reverse proxy I have for RDP-Calibre, I see this on the black screen: