wirenut

Everything posted by wirenut

  1. Thank you @binhex for this docker, and also @Frank1940 for the tutorial. I had a drive fail which I RMA'd back to Seagate, so it was an opportunity to try this docker. Had an interesting final result. From the top of the final report:

        == invoked as: /usr/local/bin/preclear_binhex.sh -f -c 2 -M 4 /dev/sdd
        == ST10000VN0004-1ZD101 (removed)
        == Disk /dev/sdd has been successfully precleared
        == with a starting sector of 64
        == Ran 2 cycles
        ==
        == Using :Read block size = 1000448 Bytes
        == Last Cycle's Pre Read Time : 17:34:23 (158 MB/s)
        == Last Cycle's Zeroing time : 0:00:31 (322607 MB/s)
        == Last Cycle's Post Read Time : 17:56:33 (154 MB/s)
        == Last Cycle's Total Time : 17:58:10
        ==
        == Total Elapsed Time 68:43:17
        ==
        == Disk Start Temperature: 32C
        ==
        == Current Disk Temperature: 33C,
        ==
        ============================================================================

     I received 4 email notifications within one minute:

        1. Disk /dev/sdd has successfully finished a preclear cycle
        2. Zeroing Disk /dev/sdd Started. Disk Temperature: 33C,
        3. Zeroing Disk /dev/sdd in progress: 99% complete. ( of 10,000,831,348,736 bytes Wrote ) Disk Temperature: 33C, Next report at 50% Calculated Write Speed: 526359 MB/s Elapsed Time of current cycle: 0:00:19 Total Elapsed time: 50:45:28
        4. Zeroing Disk /dev/sdd Done. Zeroing Elapsed Time: 0:00:31 Total Elapsed Time: 50:45:40 Disk Temperature: 33C, Calculated Write Speed: 322607 MB/s

     Checking with preclear_binhex.sh -t /dev/sdd I got this:

        Model Family: Seagate IronWolf
        Device Model: ST10000VN0004-1ZD101
        Serial Number: removed
        LU WWN Device Id: 5 000c50 0a51f22c3
        Firmware Version: SC61
        User Capacity: 10,000,831,348,736 bytes [10.0 TB]
        Disk /dev/sdd: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
        Disk model: ST10000VN0004-1Z
        Units: sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disklabel type: dos
        Disk identifier: 0x00000000

        Device Boot Start        End     Sectors Size Id Type
        /dev/sdd1         64 4294967358 4294967295   2T  0 Empty

        ########################################################################
        failed test 6
        ========================================================================1.19
        ==
        == Disk /dev/sdd is NOT precleared
        == 64 4294967295 19532873664
        ============================================================================

     Got any thoughts other than trying again? (A sketch for inspecting the disk signature by hand appears after these posts.)
  2. Woke up this morning to no power. At the approximate time of the power loss, the scheduled mover operation had been running about 10 minutes. The UPS did its thing as expected, and once power was restored the server booted up normally. Looking in the shutdown log, I see a line where the mover operation was exited once the UPS shutdown started. Does the mover operation resume from where it left off if interrupted by an automatic (or manual, for that matter) shutdown? (See the mover sketch after these posts.)
  3. Swapped cables with another drive and am rebuilding onto a new spare drive. I'll run a preclear cycle on the old drive once the rebuild is done to see whether it fails. Thanks for the assistance, johnnie.black.
  4. OK, shut down and checked connections. The array came back up with zero read errors and the disk is online but disabled. Anything serious looking? Start the rebuild with the spare drive? New diags attached. tower-diagnostics-20191030-1605.zip
  5. While copying a bunch of files this morning I started the mover, and disk 1 became disabled and went into an error state. I know these things happen from time to time (glitchy cable, controller drop) and I'm ready to rebuild the disk onto a spare; I just want expert eyes to check whether anything more serious jumps out in the diags attached before I start the process. Thank you. tower-diagnostics-20191030-1506.zip
  6. After upgrading to 6.7.0 and getting used to it for the last couple of weeks, I decided it was time to upgrade to some larger disks. Started by rebuilding parity with two new 10TB parity drives. The rebuild went OK as anticipated; once it finished I kicked off a parity check to verify. Got up this morning to check progress and it was going well, but the speed seemed about half of what was expected, as the parity check had already progressed past the point of the slower existing data disks. Took a glance at the syslog and found it filled with this, repeating:

        May 29 04:00:44 Tower nginx: 2019/05/29 04:00:44 [crit] 6636#6636: ngx_slab_alloc() failed: no memory
        May 29 04:00:44 Tower nginx: 2019/05/29 04:00:44 [error] 6636#6636: shpool alloc failed
        May 29 04:00:44 Tower nginx: 2019/05/29 04:00:44 [error] 6636#6636: nchan: Out of shared memory while allocating message of size 10567. Increase nchan_max_reserved_memory.
        May 29 04:00:44 Tower nginx: 2019/05/29 04:00:44 [error] 6636#6636: *100136 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=2 HTTP/1.1", host: "localhost"
        May 29 04:00:44 Tower nginx: 2019/05/29 04:00:44 [error] 6636#6636: MEMSTORE:00: can't create shared message for channel /disks
        May 29 04:00:45 Tower nginx: 2019/05/29 04:00:45 [crit] 6636#6636: ngx_slab_alloc() failed: no memory
        May 29 04:00:45 Tower nginx: 2019/05/29 04:00:45 [error] 6636#6636: shpool alloc failed

     I let it run awhile to see if the errors would stop and the speed improve, and even tried the new pause feature; the errors seem to have stopped, but the speed remained the same. Not sure what is going on or what it means. The only thing I've found from some time ago was tied to the Safari browser, but I am using Chrome and/or Firefox. Is it connected to the disk speed not being what is expected during the parity check, or is it some other issue to be concerned with? (See the nchan sketch after these posts.) tower-diagnostics-20190530-1829.zip
  7. Curious about functionality... with the new option to pause a parity check: if it's paused and you reboot the server, can you resume the parity check, or will it start a new one?
  8. Read through the last few posts prior to yours and you will be up and running again in no time.
  9. OK, tried this and am in the same spot: the same errors as in my earlier post; the docker command fails in Bridge mode, and the server won't start in Host mode. Any help showing me what I am doing incorrectly? In the meantime I'll keep searching the thread...
  10. I also upgraded to 6.7 and cannot start the server. Editing the template to Bridge mode, the docker command fails; switching back to Host mode, the docker starts, but logging into the container and trying to start the server fails with:

        Error: service failed to start due to unresolved dependencies: set(['user'])
        service failed to start due to unresolved dependencies: set(['iptables_openvpn'])
        Service deferred error: IPTablesServiceBase: failed to run iptables-restore [status=2]: ['iptables-restore v1.6.0: Bad IP address ""', '', 'Error occurred at line: 140', "Try `iptables-restore -h' or 'iptables-restore --help' for more information."]: internet/defer:653,sagent/ipts:133,sagent/ipts:50,util/daemon:28,util/daemon:69,application/app:384,scripts/_twistd_unix:258,application/app:396,application/app:311,internet/base:1243,internet/base:1255,internet/epollreactor:235,python/log:103,python/log:86,python/context:122,python/context:85,internet/posixbase:627,internet/posixbase:252,internet/abstract:313,internet/process:312,internet/process:973,internet/process:985,internet/process:350,internet/_baseprocess:52,internet/process:987,internet/_baseprocess:64,svc/pp:142,svc/svcnotify:32,internet/defer:459,internet/defer:567,internet/defer:653,sagent/ipts:133,sagent/ipts:50,util/error:66,util/error:47
        service failed to start due to unresolved dependencies: set(['user', 'iptables_live', 'iptables_openvpn'])
        (the line above repeats many more times, ending with:)
        service failed to start due to unresolved dependencies: set(['iptables_live', 'iptables_openvpn'])

     (See the variable-validation sketch after these posts.)
  11. OK. So for my peace of mind and understanding, nothing serious to worry about?
  12. Thanks Squid, any idea what "unclean shutdown" it is detecting?
  13. Upgraded last night from 6.5.3. Fix Common Problems notified me of an unclean shutdown, but there was no automatically started parity check. Acknowledged and rebooted; same thing. Manually started a non-correcting parity check, which completed this morning without errors. Shutdown diagnostics from the flash drive's log folder attached. tower-diagnostics-20180920-2057.zip
  14. How do I enable the strict port forwarding option? I do not have this option available on the container settings page. Is it something that needs to be added manually? (See the sketch after these posts.)
  15. Finally changed out the Marvell cards for a couple of LSI 9211-8i SAS cards flashed to IT mode and all seems well. Also cut my parity check time to almost half of what it had been. This problem is solved. Thanks again for the help.
  16. Curious how the flash backup works, so I created a flash backup, copied the backed-up files to a flash drive, and booted Unraid. It booted up seemingly OK, except all the array drives were unassigned. If you then assign all drives to their proper places and start the array, would it operate as expected? (See the note after these posts.)
  17. Now home from work with more time to research, I think the rebuild with the spare disk is the best option for my current situation, as it appears the controller is the point of mistrust and my backups are not where they should be. If you have any links to steer me toward LSI controllers that would work with my current board's PCIe 2.0 slots, I'd be grateful. Thanks for the help and advice.
  18. That took a while longer than I remembered. The SMART test passed without error; report attached along with new diagnostics. Is New Config the way to go then? I've never tried it; I've gathered from what I've read that I just reassign all disks to their original assignments, confirm parity is valid, and start the array, then run a parity check? As changing out the controllers is not immediately an option, I suppose the same thing could happen again, and I am aware of that. tower-smart-20180501-1520.zip tower-diagnostics-20180501-1524.zip
  19. Short SMART test completed without errors. Running the long test now; when it's done I will post new diagnostics. I have one spare, so replace-and-rebuild is an option if needed. No data changed after the errors started. Thanks.
  20. Thank you for the help. OK, I'll look into replacing the controller. Been lucky to date, I guess, as this hasn't happened before. After checking, I booted the server and it came back with a notification that the array turned good. The array has 0 disks with read errors; however, disk 1 is disabled. Do I replace it, rebuild it, or something else?
  21. Upgraded a few days back to 6.5.1. The update went fine and the server was working correctly. Expanded the array 2 days ago with an additional 4TB drive; no issues detected. The monthly parity check started last night, and this morning I see email notifications of one disk disabled and 8 with read errors. The parity check is still running but the sync errors just keep increasing: 391,748,176 and counting. Something is very strange and wrong here. Should I stop the parity check at this point? What steps can I take to troubleshoot? tower-diagnostics-20180501-0552.zip
  22. "Fixed bug where the server TLD is not formed correctly in the self-signed SSL cert. After installing this release delete the self-signed cert config/ssl/certs/<server-name>_unraid_bundle.pem and then reboot to let unRAID OS regenerate a new one." I've never used self signed certs as far as I know. Following the release notes I performed the instructions above. the file <server-name>_unraid_bundle.pem did exist but after reboot nothing new was created. Only certificate_bundle.pem remains. Just want to be sure this is expected behavior. Thanks.
  23. Does the web UI not work for this docker?
  24. Yes. Worked for me also. Thank you!
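
Editor's note on post 1: the report claims a 10TB zeroing pass finished in 0:00:31 (322607 MB/s), which is physically impossible at real write speeds, and the -t check shows a stale 2TB MBR partition entry, so the failure is in the signature/verify step rather than the reads. Below is a minimal bash sketch for inspecting the disk by hand; /dev/sdd is taken from the post, and the exact byte layout preclear's "test 6" checks is not confirmed by the post, so this only shows what is actually sitting in sector 0:

    #!/bin/bash
    # Inspect the first sector (the MBR) of the drive from post 1.
    # DISK is taken from the post; double-check the device node first.
    DISK=/dev/sdd

    # Hex-dump the MBR: a zeroed-but-unsigned disk shows all zeros here,
    # while a stale partition table shows non-zero bytes near offset 446.
    dd if="$DISK" bs=512 count=1 2>/dev/null | od -A d -t x1

    # Show the partition table the kernel sees (matches the fdisk-style
    # output quoted in the post).
    fdisk -l "$DISK"

If the stale 2TB entry is still present, zeroing sector 0 before rerunning preclear is the usual remedy, but that write is destructive, so verify the device node twice before attempting it.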
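
Editor's note on post 2: an interrupted move is generally harmless because a mover-style pass is idempotent; each run re-scans the cache and transfers whatever still qualifies, so the next scheduled run picks up where the interrupted one stopped. The sketch below illustrates that pattern only; it is not Unraid's actual mover script, and both paths are hypothetical:

    #!/bin/bash
    # Illustrative mover-style pass: each source file is removed only after
    # it has transferred cleanly, so an interrupted run simply leaves the
    # remainder in place for the next pass.
    SRC=/mnt/cache      # hypothetical cache mount
    DST=/mnt/disk1      # hypothetical array target
    rsync -a --remove-source-files "$SRC"/ "$DST"/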
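
Editor's note on post 6: the syslog itself names the knob ("Increase nchan_max_reserved_memory"). The messages mean the nchan pub/sub module that feeds the web UI's live disk stats exhausted its shared-memory pool, which is a web UI nuisance rather than an array fault, so it is unlikely to explain the parity-check speed by itself. A quick way to see what the running nginx reserves, assuming a stock Unraid shell:

    #!/bin/bash
    # Dump the active nginx configuration and look for the nchan
    # shared-memory directive named in the error message.
    nginx -T 2>/dev/null | grep -n 'nchan_max_reserved_memory'

    # Count how often the allocator failure has hit the syslog.
    grep -c 'ngx_slab_alloc() failed' /var/log/syslog

If the first grep comes back empty, the module is running at its default reservation; whether Unraid's bundled config sets the directive at all is not something the post confirms.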
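
Editor's note on posts 9-10: the decisive line is iptables-restore rejecting 'Bad IP address ""' at line 140 of the generated ruleset, i.e. an empty value was substituted into a rule, which points at a container variable that ends up blank in Host mode. The sketch below shows the kind of fail-fast validation involved; the variable names (VPN_REMOTE, LAN_NETWORK) are assumptions drawn from typical binhex VPN templates, not confirmed by the post:

    #!/bin/bash
    # Fail fast before rendering iptables rules if any address variable is
    # empty; an empty value becomes the "" that iptables-restore rejects.
    for var in VPN_REMOTE LAN_NETWORK; do
        if [ -z "${!var}" ]; then
            echo "ERROR: $var is empty; refusing to build iptables rules" >&2
            exit 1
        fi
    done
    echo "address variables present; safe to render the ruleset"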
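
Editor's note on post 14: in the binhex VPN containers this option is carried by a container environment variable rather than a dedicated settings field, so on an older template you add the variable yourself (on the Unraid template page, via a new Variable entry). A hedged docker run sketch; the variable name and image tag should be verified against the container's current documentation:

    # Add the strict port forwarding switch as an environment variable.
    # STRICT_PORT_FORWARD and the image name are per binhex's docs; verify
    # them before relying on this.
    docker run -d \
        --name=delugevpn \
        --cap-add=NET_ADMIN \
        -e STRICT_PORT_FORWARD=yes \
        binhex/arch-delugevpn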
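
Editor's note on post 16: Unraid keeps the array's drive assignments in config/super.dat on the flash device, so a restore that boots with every array drive unassigned suggests that file was not copied across. A quick check, assuming the flash drive is mounted at /boot as on a running system:

    #!/bin/bash
    # The disk-assignment state lives here; if it is missing, the GUI shows
    # all drives unassigned after a flash restore.
    ls -l /boot/config/super.dat

Provided the data and parity disks are intact, reassigning every drive to exactly the slot it held before and starting the array should behave as expected; getting the slots right is the critical part.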
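
Editor's note on post 22: the cert filename embeds the server's hostname, and the diagnostics filenames elsewhere in these posts suggest this server is named Tower, so the sketch below uses that; treat the path and filename as the release note states them:

    #!/bin/bash
    # Per the release note: remove the self-signed bundle, then reboot so
    # unRAID OS can regenerate it. "Tower" is inferred from the diagnostics
    # filenames in these posts.
    rm /boot/config/ssl/certs/Tower_unraid_bundle.pem

    # After the reboot, list what was (re)created.
    ls -l /boot/config/ssl/certs/

Whether the regenerated file keeps the old name or lands as certificate_bundle.pem is exactly what the post is asking; the listing above at least shows which outcome occurred.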