
wirenut

Members
  • Content Count: 166
  • Joined
  • Last visited

Community Reputation: 6 Neutral

About wirenut
  • Rank: Advanced Member
  • Birthday: February 14
  • Gender: Male
  • Location: United States
  • Profile views: 884
  1. Yes, you can use both at the same time. Your situation may be what is addressed in the third post of this thread. (Sorry, it won't let me post the info and link.)
  2. The container name on the dashboard is blue, indicating there is an update, but the update option is not in the drop-down menu. Go to the Docker tab and, sure enough, "update ready" and "apply update" are there. Run the update and it completes, but it still shows "update ready". Tried twice now, same result. Any ideas?
  3. Remotely connected using WireGuard from my phone and upgraded 6.8.1 to 6.8.2 without error. Thank you, unraid team!
  4. If you already have your AirVPN working with binhex-deluge, then you should just have to enable privoxy in deluge, then set your other applications to run through the proxy it creates.
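For anyone following along, a minimal sketch of what "run through the proxy" can look like for a command-line tool, assuming Privoxy has been enabled in the container and listens on its default port 8118; the address 192.168.1.10 is a placeholder for your unRAID server's LAN IP, not a value from the post:

```shell
# Sketch only: point proxy-aware tools at the Privoxy instance the
# container exposes (default port 8118). Replace 192.168.1.10 with
# your own server's address.
export http_proxy="http://192.168.1.10:8118"
export https_proxy="http://192.168.1.10:8118"

# Optional check: the address reported here should belong to the VPN
# endpoint, not your ISP.
curl --silent https://ifconfig.io
```

Many applications also accept the same host/port pair directly in their own proxy settings, which is usually the cleaner route for GUI apps.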
  5. Thank You. Booted right up. Can't wait to give WireGuard a try. Love the new login page also!
  6. I do. I originally installed it to copy data from old discs a while back. Occasionally I use it for that and with the preclear plugin. That's all. I will. Thank you.
  7. preclear_finish_XXXXXXXX_2019-12-06.txt Here you go.
  8. Thank you @binhex for this docker, and also @Frank1940 for the tutorial. Had a drive fail which I RMA'd back to Seagate, so it was an opportunity to try this docker. Had an interesting final result. From the top of the final report:

     == invoked as: /usr/local/bin/preclear_binhex.sh -f -c 2 -M 4 /dev/sdd
     == ST10000VN0004-1ZD101 (removed)
     == Disk /dev/sdd has been successfully precleared
     == with a starting sector of 64
     == Ran 2 cycles
     ==
     == Using :Read block size = 1000448 Bytes
     == Last Cycle's Pre Read Time  : 17:34:23 (158 MB/s)
     == Last Cycle's Zeroing time   : 0:00:31 (322607 MB/s)
     == Last Cycle's Post Read Time : 17:56:33 (154 MB/s)
     == Last Cycle's Total Time     : 17:58:10
     ==
     == Total Elapsed Time 68:43:17
     ==
     == Disk Start Temperature: 32C
     ==
     == Current Disk Temperature: 33C,
     ==
     ============================================================================

     I received 4 email notifications within one minute:
     1. Disk /dev/sdd has successfully finished a preclear cycle
     2. Zeroing Disk /dev/sdd Started. Disk Temperature: 33C,
     3. Zeroing Disk /dev/sdd in progress: 99% complete. ( of 10,000,831,348,736 bytes Wrote ) Disk Temperature: 33C, Next report at 50% Calculated Write Speed: 526359 MB/s Elapsed Time of current cycle: 0:00:19 Total Elapsed time: 50:45:28
     4. Zeroing Disk /dev/sdd Done. Zeroing Elapsed Time: 0:00:31 Total Elapsed Time: 50:45:40 Disk Temperature: 33C, Calculated Write Speed: 322607 MB/s

     Checking with preclear_binhex.sh -t /dev/sdd I got this:

     Model Family:     Seagate IronWolf
     Device Model:     ST10000VN0004-1ZD101
     Serial Number:    removed
     LU WWN Device Id: 5 000c50 0a51f22c3
     Firmware Version: SC61
     User Capacity:    10,000,831,348,736 bytes [10.0 TB]
     Disk /dev/sdd: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
     Disk model: ST10000VN0004-1Z
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: dos
     Disk identifier: 0x00000000

     Device     Boot Start        End    Sectors Size Id Type
     /dev/sdd1          64 4294967358 4294967295   2T  0 Empty

     ########################################################################
     failed test 6
     ========================================================================1.19
     == Disk /dev/sdd is NOT precleared ==
     64 4294967295 19532873664
     ============================================================================

     Got any thoughts other than trying again?
  9. Woke up this morning to no power. At the approximate time of the power loss, the scheduled mover operation would have been running for about 10 minutes. The UPS did its thing as expected, and once power was restored the server booted up normally. Looking in the shutdown log, I see a line where the mover operation was exited once the UPS shutdown started. Does the mover operation resume from where it left off if interrupted by an automatic (or manual, for that matter) shutdown?
  10. Swapped cables with another drive and am rebuilding with a new spare drive. I'll run a preclear cycle on the old drive once the rebuild is done to see whether it fails. Thanks for the assistance, johnnie.black.
  11. OK, shut down and checked connections. The array came back up with zero read errors and the disc is online but disabled. Anything serious looking? Start a rebuild with the spare drive? New diags attached. tower-diagnostics-20191030-1605.zip
  12. Copying a bunch of files this morning, I started the mover and disc 1 disabled and went to an error state. I know these things happen from time to time (glitchy cable, controller drop) and am ready to rebuild the disc with a spare. Just want expert eyes to look and see if anything more serious jumps out in the diags I have attached before I start the process. Thank you. tower-diagnostics-20191030-1506.zip
  13. After upgrading to 6.7.0 and getting used to it for the last couple of weeks, I decided it was time to upgrade to some larger discs. Started by rebuilding parity with two new 10TB parity drives. The rebuild went OK as anticipated; once it finished, I kicked off a parity check to verify. Got up this morning to check progress and it was going well, but I noticed the speed seemed about half what was expected, as the parity check had completed past the point of the slower existing data discs. Took a glance at the syslog and found it filled with this, repeating:

      May 29 04:00:44 Tower nginx: 2019/05/29 04:00:44 [crit] 6636#6636: ngx_slab_alloc() failed: no memory
      May 29 04:00:44 Tower nginx: 2019/05/29 04:00:44 [error] 6636#6636: shpool alloc failed
      May 29 04:00:44 Tower nginx: 2019/05/29 04:00:44 [error] 6636#6636: nchan: Out of shared memory while allocating message of size 10567. Increase nchan_max_reserved_memory.
      May 29 04:00:44 Tower nginx: 2019/05/29 04:00:44 [error] 6636#6636: *100136 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=2 HTTP/1.1", host: "localhost"
      May 29 04:00:44 Tower nginx: 2019/05/29 04:00:44 [error] 6636#6636: MEMSTORE:00: can't create shared message for channel /disks
      May 29 04:00:45 Tower nginx: 2019/05/29 04:00:45 [crit] 6636#6636: ngx_slab_alloc() failed: no memory
      May 29 04:00:45 Tower nginx: 2019/05/29 04:00:45 [error] 6636#6636: shpool alloc failed

      I let it run a while to see if the errors would stop and the speed improve, and even tried the new pause feature. The errors seem to have stopped, but the speed remained the same. Not sure what is going on or what it means. The only thing I've found, from some time ago, was tied to the Safari browser, but I am using Chrome and/or Firefox. Is it connected to disc speed not being what is expected during the parity check verify, or some other issue to be concerned with? tower-diagnostics-20190530-1829.zip
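The error message itself names the relevant directive, nchan_max_reserved_memory, which sizes the shared-memory pool nchan uses for its pub/sub channels (the webGUI's /disks channel here). A sketch only: the 64M value is an assumption, not a recommendation, and since unRAID runs its root filesystem from RAM, a manual edit to /etc/nginx/nginx.conf would not survive a reboot.

```nginx
# Sketch: raise nchan's shared-memory pool inside the http block,
# per the directive named in the log. 64M is an illustrative value.
http {
    nchan_max_reserved_memory 64M;

    # (remainder of the stock unRAID http block unchanged)
}
```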
  14. Curious about functionality... with the new option to pause a parity check, if it's paused and you reboot the server, can you resume the parity check or will it start a new one?
  15. Read through the last few posts prior to yours and you will be up and running again in no time.