Everything posted by wirenut

  1. Add me to the list. After the update I have no web UI access. Renamed the rc file and restarted; web UI times out. Removed the session folder and restarted; web UI times out. Removed both of the above and forced an update; web UI still times out. I have made no mods to the rc file, either the original or the new one. Tried with Chrome and Firefox on the local LAN from desktop and phone, with the same result from an external PC and phone via WireGuard into Unraid.
  2. Yes, you can use both at the same time. Your situation may be what is addressed in the third post of this thread? (Sorry, it won't let me post the info and link.)
  3. The container name on the dashboard is blue, indicating there is an update, but the update option is not in the drop-down menu. Go to the Docker tab and, sure enough, "update ready" and "apply update" are there. I run the update and it completes, but it still shows "update ready". Tried twice now with the same result. Any ideas?
  4. Remotely connected using WireGuard from my phone and upgraded 6.8.1 to 6.8.2 without error. Thank you, Unraid team!
  5. If you already have AirVPN working with binhex-deluge, then you should just have to enable Privoxy in the Deluge container, then set your other apps to run through the proxy it creates.
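     A quick way to sketch that last step (the host IP 192.168.1.10 and Privoxy port 8118 here are placeholder assumptions; use your Unraid box's address and the Privoxy port shown in the binhex-delugevpn template):

     ```shell
     # Confirm Privoxy is reachable and that traffic routed through it
     # exits via the VPN: the IP reported should be the AirVPN endpoint,
     # not your WAN address.
     # 192.168.1.10:8118 is a placeholder host:port for this sketch.
     curl --proxy http://192.168.1.10:8118 https://ifconfig.co
     ```

     In your other containers, enter that same host and port wherever the app offers an HTTP proxy setting.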
  6. Thank you. Booted right up. Can't wait to give WireGuard a try. Love the new login page, too!
  7. I do. I originally installed it a while back to copy data from old discs. Occasionally I use it along with the preclear plugin; that's all. I will, thank you.
  8. preclear_finish_XXXXXXXX_2019-12-06.txt Here you go.
  9. Thank you @binhex for this docker, and also @Frank1940 for the tutorial. Had a drive fail which I RMA'd back to Seagate, so it was an opportunity to try this docker. Had an interesting final result. From the top of the final report:
     == invoked as: /usr/local/bin/preclear_binhex.sh -f -c 2 -M 4 /dev/sdd
     == ST10000VN0004-1ZD101 (removed)
     == Disk /dev/sdd has been successfully precleared
     == with a starting sector of 64
     == Ran 2 cycles
     ==
     == Using :Read block size = 1000448 Bytes
     == Last Cycle's Pre Read Time : 17:34:23 (158 MB/s)
     == Last Cycl
  10. Woke up this morning to no power. At the approximate time of the power loss, the scheduled mover operation would have been running for about 10 minutes. The UPS did its thing as expected, and once power was restored the server booted up normally. Looking in the shutdown log, I see a line where the mover operation was exited once the UPS shutdown started. Does the mover operation resume from where it left off if interrupted by an automatic (or manual, for that matter) shutdown?
  11. Swapped cables with another drive and am rebuilding with a new spare drive. I'll run a preclear cycle on the old drive once the rebuild is done to see whether it fails. Thanks for the assistance, johnnie.black.
  12. OK, shut down and checked connections. The array came back up with zero read errors and the disc is online but disabled. Anything serious-looking? Start the rebuild with the spare drive? New diags attached. tower-diagnostics-20191030-1605.zip
  13. While copying a bunch of files this morning I started the mover, and disc 1 disabled and went to an error state. I know these things happen from time to time (glitchy cable, controller drop) and am ready to rebuild the disc with a spare. I just want expert eyes to see if anything more serious jumps out in the diags I have attached before I start the process. Thank you. tower-diagnostics-20191030-1506.zip
  14. After upgrading to 6.7.0 and getting used to it over the last couple of weeks, I decided it was time to upgrade to some larger discs. Started by rebuilding parity with two new 10TB parity drives. The rebuild went OK as anticipated; once it finished, I kicked off a parity check to verify. Got up this morning to check progress and it was going well, but the speed seemed about half what was expected, as the parity check had already completed past the point of the slower existing data discs. Took a glance at the syslog and found it filled with this repeating: May 29 04:00:44 Tower nginx: 2019/05/29 04:00
  15. Curious on functionality... with the new option to pause a parity check: if it is paused and you reboot the server, can you resume the parity check, or will it start a new one?
  16. Read through the last few posts prior to yours and you will be up and running again in no time.
  17. OK, tried this and am in the same spot: same errors as in my earlier post, the docker command fails in bridge mode, and the server won't start in host mode. Any help showing me what I am doing incorrectly? In the meantime I'll keep searching the thread...
  18. I also upgraded to 6.7 and cannot start the server. Edit the template to Bridge mode and the docker command fails; switch back to Host mode and docker starts, but logging into the container to try to start the server fails with:
      Error: service failed to start due to unresolved dependencies: set(['user'])
      service failed to start due to unresolved dependencies: set(['iptables_openvpn'])
      Service deferred error: IPTablesServiceBase: failed to run iptables-restore [status=2]: ['iptables-restore v1.6.0: Bad IP address ""', '', 'Error occurred at line: 140', "Try `iptables-restore -h'
  19. OK. So, for my peace of mind and understanding, there is nothing serious to worry about?
  20. Thanks, Squid. Any idea what "unclean shutdown" it is detecting?
  21. Upgraded last night from 6.5.3. Fix Common Problems notified me of an unclean shutdown, but no parity check was started automatically. Acknowledged and rebooted; same thing. Manually started a non-correcting parity check, which completed this morning without errors. Shutdown diagnostics from the flash drive log folder attached. tower-diagnostics-20180920-2057.zip
  22. How do I enable the strict port forwarding option? I do not have this option available on the container settings page. Is it something that needs to be added manually?
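      For what it's worth, on the binhex VPN containers strict port forwarding is controlled by a container environment variable rather than a stock template field, so it usually does have to be added by hand. A sketch of what that looks like (the container and image names below are examples; everything else should come from your existing template):

      ```shell
      # In the Unraid template: "Add another Path, Port, Variable..." ->
      # type Variable, Key: STRICT_PORT_FORWARD, Value: yes
      # The equivalent docker CLI fragment would be:
      docker run -d \
        --name=binhex-delugevpn \
        -e STRICT_PORT_FORWARD=yes \
        binhex/arch-delugevpn   # plus your existing VPN/network options
      ```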
  23. Finally changed out the Marvell cards for a couple of LSI 9211-8i SAS cards flashed to IT mode and all seems well. Also cut my parity check time to almost half of what it had been. This problem is solved. Thanks again for the help.
  24. Curious how the flash backup works, so I created a flash backup, copied the backed-up files to a flash drive, and booted Unraid. It booted up seemingly OK, just with all array drives unassigned. If you then assign all drives to their proper places and start the array, would it operate as expected?