cisellis

Everything posted by cisellis

  1. Okay, I'm bringing this topic back with an update. I did lots of testing and never figured out what was going on here, plugging things into different ports, replacing switches, etc. The Orbi issue of the hardwired ports dying with no warning got so bad (daily) that I wrote it off as failing hardware generating some kind of incompatibility with something on my network. I sold the Orbis cheap and bought a Ubiquiti Dream Machine and 3 Access Point Lites to replace everything. The 3 APs are hardwired into the UDM, and the 4th port has a Netgear unmanaged 8-port switch. The Unraid server is plugged into that Netgear switch, along with a few other small things (a Raspberry Pi, etc.). I segmented a bunch of stuff into VLANs for IoT, and generally things are way more reliable.

     HOWEVER, I am still occasionally having this issue where it seems like the Unraid server is killing the network, and I have no idea why. For example, I noticed the network seemed slower yesterday at some point, but it still worked and I was busy with the kids, etc. By 10 PM last night I get "the wifi isn't working at all" from my wife, and Alexa is complaining "no network," etc. I reboot the UDM, no change. Wifi is up and my phone is connected, but traffic just isn't flowing at all. I go downstairs and power off the Unraid server manually. The wifi instantly starts moving traffic at full speed, and suddenly my light switches and Alexa respond, etc.

     Any tips on what to do here? I can run a trace and know enough networking to be dangerous, but I'm not a pro. I have the UDM logs I can go through, etc. I'm trying to figure out how to troubleshoot what is happening in these situations where it seems like something on the Unraid server is DOSing the rest of the network at random intervals (monthly?). I'm wondering if I need to do some packet captures or something when this is happening, but I'm not sure what I'd look for or what tools to use; a sketch of what I have in mind is below. Suggestions welcome!
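     For reference, this is the kind of capture I'm planning to run the next time it happens. It's a minimal sketch, and eth0 plus the output path are assumptions for my setup (check the interface name with ip link, and the capture directory has to exist first):

         # Capture broadcast/multicast traffic (a common cause of network-wide
         # slowdowns) into rotating 5-minute pcap files to open in Wireshark.
         # -G 300 rotates every 300 s; -W 12 stops after 12 files (~1 hour).
         tcpdump -i eth0 -n -G 300 -W 12 \
             -w '/mnt/user/captures/cap_%Y%m%d_%H%M%S.pcap' \
             'broadcast or multicast'

     If the server really is flooding the LAN, the top talkers in those files should make it obvious.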
  2. Howdy all, this is more of a general networking issue, but it's specific to my Unraid server and I would love any thoughts you have on how I could do further troubleshooting. I have been chasing network failures off and on for a couple of months now, after moving to a larger house. Before that I had no issues at all.

     The issue is that occasionally all hardwired ports on the RBK50 Orbi router fail and will no longer respond at all, but wireless internet continues to work fine. When that happens, you can't connect to the RBK50 or the internet from anything hardwired. The satellite RBS20s still work fine if they are connected wirelessly, and the router still responds if you're using the Orbi phone app.

     While troubleshooting, I think I tracked the problem to my Unraid server. It appears that SOMETIMES, when my 'Anniversary' build Unraid server (which runs a GA-7PESH2 motherboard) is plugged into the RBK50 at all (even into a Netgear switch that is plugged into the RBK50), it kills all the hardwired ports on the RBK50. It doesn't happen all the time, and once it has happened, the only way to fix it seems to be unplugging the Unraid server from the network and fully power cycling the RBK50. If you plug the Unraid server in again immediately, it often kills the RBK50 wired ports again. If you wait a while and plug it back in, it often goes back to working for days or weeks.

     I can't find any logs on Unraid or the RBK50 that point to any issues. I am out of ideas besides "replace the RBK50". Any thoughts on what I could be missing here?
  3. Alrighty. Had to take care of work, dinner, etc. The initial problem does seem to be having my cache drive formatted as XFS. When I did the upgrade, Unraid lost access to the cache drive, and somewhere along the way it also corrupted my docker.img. I moved everything off cache, unmounted it, formatted it to BTRFS, remounted it, and did the upgrade again. After the upgrade, it kicked the cache out as unassigned, but it let me format it and add it right back with no issues (interestingly, the format defaulted to XFS). Docker had the same issue as before: the service failed to start. I looked into that, and this time I got a bunch of btrfs errors about failed reads around docker. Reading up on that, it seems like my docker.img was corrupted when the upgrade rendered the cache drive wonky. I deleted the docker.img and restarted the service. It loaded fine and I'm re-adding my containers from templates now. Looks to be salvaged. Probably time for some docker cleanup anyway. The rough CLI steps are below in case it helps anyone.
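     For anyone else hitting a corrupt docker.img, this is roughly what I did from the console. The paths are my setup's (a docker.img under the system share on cache is the Unraid default, as far as I know), so adjust to yours:

         # Check whether btrfs has logged read/write/corruption errors on the pool
         btrfs device stats /mnt/cache

         # With Docker disabled first (Settings > Docker > Enable: No),
         # remove the corrupt image; a fresh one gets created on re-enable
         rm /mnt/cache/system/docker/docker.img

     After re-enabling Docker, the containers come back from their saved templates (Apps > Previous Apps).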
  4. Will do. Thanks. Also, I forgot that you have to set cache to 'yes' and THEN run the mover. If you just set it to 'no', it leaves the files stranded out there.
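     If you'd rather kick it off from the shell, the mover is just a script; the path below is where stock Unraid keeps it, as far as I know:

         # Moves files between cache and array according to each share's
         # "Use cache" setting; with cache set to 'yes' it moves cache -> array
         /usr/local/sbin/mover

     Same effect as the Move button on the Main page.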
  5. Okay, I think I just figured this out. For anyone else that ends up here... I saw another topic here: And they mention the disk.cfg being renamed during the 6.9 install, like so: "This is because to support multiple pools, code detects the upgrade to 6.9.0 and moves the 'cache' device settings out of 'config/disk.cfg' and into 'config/pools/cache.cfg'. If you downgrade back to 6.8.3 these settings need to be restored." Since I had downgraded to 6.8.3, I put the USB in another computer, restored the config/disk.cfg file from the .bak copy, and rebooted Unraid. All my shares are back, I'm on 6.8.3, and I'm not getting any errors any more. I'm going to set everything on cache to 'no', run the mover, format the cache to BTRFS, make sure I run the backup, and try the upgrade to 6.9 again. The restore itself was just a file copy (sketch below).
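     The restore step, roughly, on the other computer with the Unraid USB stick plugged in (the mount point is whatever your OS gives it; filenames per my flash drive):

         cd /path/to/usb/config        # the 'config' folder on the flash drive
         cp disk.cfg disk.cfg.69       # keep the 6.9 version around, just in case
         cp disk.cfg.bak disk.cfg      # restore the pre-upgrade settings

     Then put the stick back in the server and boot into 6.8.3.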
  6. So far I can't figure out how to get to any diags since nothing will load. I think I'm able to dump the USB and the cache drive to disk by taking them out of the server. Also, the array started last time; there are just zero disks, and the only buttons are reboot and one other.
  7. Okay, so I am trying to figure out if my situation is salvageable. From what I can tell from this thread: I upgraded to 6.9 and my cache drive became unavailable. My appdata was on there, and I was using XFS on the cache drive because btrfs was unstable previously and I had better luck with XFS. Oops. The symptom was that docker would no longer start and I couldn't figure out why. I rebooted, no change. Stopped and started the service, no change. I downgraded back to 6.8.3 and got errors about not being able to access docker.img. All my shares were still missing and the main page was blank. I did a restore using the plugin. The restore finished, and now I'm getting tons of errors that I am out of space, plugins not loading, etc. Sample from the plugins page:

     Warning: file_put_contents(): Only 0 of 3886 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix.plugin.manager/include/ShowPlugins.php on line 136

     Uninstalling plugins doesn't work, etc. I can't format the drives because they won't load at all. I can't connect to the server to pull the syslog, etc. It feels like I'm borked because my cache drive was the wrong fs, and my only nuclear option is to completely reinstall Unraid and start over. Thoughts on whether I have any other options and whether my data is salvageable or not? Thanks in advance for any help.
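     For my own notes: if I can get to a local console, the first thing I'll check is where the free space actually ran out. As I understand it, Unraid's root filesystem lives in RAM, so these are the usual suspects:

         df -h /          # rootfs (RAM-backed); 100% here would explain the failed writes
         df -h /var/log   # the log tmpfs can fill up and break the webUI
         df -h /boot      # the USB flash drive itself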
  8. Just finished. No issues. Gonna rerun the rest!
  9. I'm rerunning the same drive from scratch and so far it looks like the issue is resolved. It made it through the clear and the zero and is almost done with the post-read. Thanks!
  10. Awesome! I can rerun it when you push an update.
  11. Update on current status. I saw two similar issues previously and wasn't sure if they were the same. Some of the drives seemed to restart over and over, which is the log posted above. Others seemed to get "stuck" at 99%, and I didn't pull the logs to find out if they were showing the same thing.

     Fast forward to now: the 3.0 TB drive I started a new preclear on (erase and preclear, 1 cycle, no pre-read) has been running for about 24 hours. It went through 25%, 50%, and 75% and sent email/UI status correctly. It looks like it hit 99% status and stopped earlier this AM. The UI under unassigned drives still says 99%, but one more email/UI update fired around 5 AM saying "Erasing started on ..." with no percentage, so it almost feels like it started over. I'm not seeing the same results in the logs yet; they just seem to have stopped:

         Feb 13 07:17:27 preclear_disk_PK1234P8JHP5PX_17218: Command: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --erase-clear --notify 3 --frequency 4 --cycles 1 --skip-preread --no-prompt /dev/sdk
         Feb 13 07:17:27 preclear_disk_PK1234P8JHP5PX_17218: Preclear Disk Version: 1.0.9
         Feb 13 07:17:29 preclear_disk_PK1234P8JHP5PX_17218: S.M.A.R.T. info type: default
         Feb 13 07:17:29 preclear_disk_PK1234P8JHP5PX_17218: S.M.A.R.T. attrs type: default
         Feb 13 07:17:29 preclear_disk_PK1234P8JHP5PX_17218: Disk size: 3000592982016
         Feb 13 07:17:29 preclear_disk_PK1234P8JHP5PX_17218: Disk blocks: 732566646
         Feb 13 07:17:29 preclear_disk_PK1234P8JHP5PX_17218: Blocks (512 byte): 5860533168
         Feb 13 07:17:29 preclear_disk_PK1234P8JHP5PX_17218: Block size: 4096
         Feb 13 07:17:29 preclear_disk_PK1234P8JHP5PX_17218: Start sector: 0
         Feb 13 07:17:35 preclear_disk_PK1234P8JHP5PX_17218: Erasing: openssl enc -aes-256-ctr -pass pass:'******' -nosalt < /dev/zero > /tmp/.preclear/sdk/fifo
         Feb 13 07:17:35 preclear_disk_PK1234P8JHP5PX_17218: Erasing: emptying the MBR.
         Feb 13 07:17:35 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd if=/tmp/.preclear/sdk/fifo of=/dev/sdk bs=2097152 seek=2097152 count=3000590884864 conv=notrunc iflag=count_bytes,nocache,fullblock oflag=seek_bytes iflag=fullblock
         Feb 13 07:17:35 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd pid [20246]
         Feb 13 09:22:08 preclear_disk_PK1234P8JHP5PX_17218: Erasing: progress - 10% erased
         Feb 13 11:31:10 preclear_disk_PK1234P8JHP5PX_17218: Erasing: progress - 20% erased
         Feb 13 13:38:25 preclear_disk_PK1234P8JHP5PX_17218: Erasing: progress - 30% erased
         Feb 13 15:47:32 preclear_disk_PK1234P8JHP5PX_17218: Erasing: progress - 40% erased
         Feb 13 17:56:33 preclear_disk_PK1234P8JHP5PX_17218: Erasing: progress - 50% erased
         Feb 13 20:05:42 preclear_disk_PK1234P8JHP5PX_17218: Erasing: progress - 60% erased
         Feb 13 22:14:59 preclear_disk_PK1234P8JHP5PX_17218: Erasing: progress - 70% erased
         Feb 14 00:24:15 preclear_disk_PK1234P8JHP5PX_17218: Erasing: progress - 80% erased
         Feb 14 02:33:20 preclear_disk_PK1234P8JHP5PX_17218: Erasing: progress - 90% erased
         Feb 14 04:45:20 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 1429191+0 records out
         Feb 14 04:45:20 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 2997230764032 bytes (3.0 TB, 2.7 TiB) copied, 76994.5 s, 38.9 MB/s
         Feb 14 04:45:20 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 1429464+0 records in
         Feb 14 04:45:20 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 1429464+0 records out
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 2997803286528 bytes (3.0 TB, 2.7 TiB) copied, 77009 s, 38.9 MB/s
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 1429723+0 records in
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 1429723+0 records out
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 2998346448896 bytes (3.0 TB, 2.7 TiB) copied, 77023.5 s, 38.9 MB/s
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 1429999+0 records in
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 1429999+0 records out
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 2998925262848 bytes (3.0 TB, 2.7 TiB) copied, 77038.1 s, 38.9 MB/s
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 1430260+0 records in
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 1430260+0 records out
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 2999472619520 bytes (3.0 TB, 2.7 TiB) copied, 77052.6 s, 38.9 MB/s
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 1430518+0 records in
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 1430518+0 records out
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 3000013684736 bytes (3.0 TB, 2.7 TiB) copied, 77067.1 s, 38.9 MB/s
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 1430791+0 records in
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 1430791+0 records out
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd output: 3000586207232 bytes (3.0 TB, 2.7 TiB) copied, 77081.6 s, 38.9 MB/s
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: dd process hung at 3000588304384, killing....
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Continuing disk write on byte 3000586207232
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: openssl enc -aes-256-ctr -pass pass:'******' -nosalt < /dev/zero > /tmp/.preclear/sdk/fifo
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd if=/tmp/.preclear/sdk/fifo of=/dev/sdk bs=2097152 seek=3000586207232 count=6774784 conv=notrunc iflag=count_bytes,nocache,fullblock oflag=seek_bytes iflag=fullblock
         Feb 14 04:45:21 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd pid [20853]
         Feb 14 04:47:11 preclear_disk_PK1234P8JHP5PX_17218: Erasing: dd - wrote 3000592982016 of 3000592982016.
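     Side note: writing this up helped me make sense of the recovery step in the log. When dd stops making progress, the script kills it and relaunches it from the last completed byte. A simplified sketch of that pattern, reconstructed from the log above (not the plugin's actual code):

         # Restart the zeroing dd at the last completed byte offset
         resume=3000586207232    # from "Continuing disk write on byte ..."
         total=3000592982016     # disk size logged at the start of the run
         remaining=$((total - resume))
         # oflag=seek_bytes / iflag=count_bytes make seek= and count= byte counts
         dd if=/dev/zero of=/dev/sdk bs=2097152 conv=notrunc \
           seek=$resume oflag=seek_bytes count=$remaining iflag=count_bytes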
  12. Most of my drives are the same, HGST Deskstar 3.0 TB drives. It had no problem with the smaller laptop drives, but had this behavior on all of the larger ones, so it seems like it might be related to the size of the drive. I'm rerunning one of them with the updated plugin and I'll see what happens.
  13. I think that is where I'm at. I'm only running one drive this time and I'm getting the same behavior (from the log above). It's at about 70 hours and has restarted at least 2-3 times from what I can tell. I haven't tried stopping and restarting it yet. I will probably wait and see if I can find another solution first. I saw something about a docker container for preclear and I might try that instead if I can't figure out what is going on.
  14. I'm having the same issue where drives keep "erasing" over and over again or getting stuck at 99%. I had two drives that were "erasing" for 200+ hours before I killed the process. I started them over and they both hung at 99%. I started one of the drives by itself, and it keeps getting to the end of the erase and restarting. I'll post the logs here and a few notes:

     These are drives connected with a USB dock, and I was able to clear 8-10 other drives this way before this. All of the ones failing are Seagate or Hitachi 3-4 TB drives. I have the Unassigned Devices plugin, but they aren't mounted. I'm running them as "erase and clear" with the gfjardim script, 1 cycle, no pre-read, and this is a sample of what I'm seeing:

         Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: Command: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --erase-clear --notify 3 --frequency 1 --cycles 1 --skip-preread --skip-postread --no-prompt /dev/sdk
         Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: Preclear Disk Version: 1.0.9
         Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: S.M.A.R.T. info type: default
         Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: S.M.A.R.T. attrs type: default
         Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: Disk size: 4000787030016
         Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: Disk blocks: 976754646
         Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: Blocks (512 byte): 7814037168
         Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: Block size: 4096
         Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: Start sector: 0
         Feb 03 08:48:20 preclear_disk_ZFN1PJC3_650: Erasing: openssl enc -aes-256-ctr -pass pass:'******' -nosalt < /dev/zero > /tmp/.preclear/sdk/fifo
         Feb 03 08:48:20 preclear_disk_ZFN1PJC3_650: Erasing: emptying the MBR.
         Feb 03 08:48:20 preclear_disk_ZFN1PJC3_650: Erasing: dd if=/tmp/.preclear/sdk/fifo of=/dev/sdk bs=2097152 seek=2097152 count=4000784932864 conv=notrunc iflag=count_bytes,nocache,fullblock oflag=seek_bytes iflag=fullblock
         Feb 03 08:48:20 preclear_disk_ZFN1PJC3_650: Erasing: dd pid [3330]
         Feb 03 11:32:44 preclear_disk_ZFN1PJC3_650: Erasing: progress - 10% erased
         Feb 03 14:20:23 preclear_disk_ZFN1PJC3_650: Erasing: progress - 20% erased
         Feb 03 17:09:09 preclear_disk_ZFN1PJC3_650: Erasing: progress - 30% erased
         Feb 03 19:58:05 preclear_disk_ZFN1PJC3_650: Erasing: progress - 40% erased
         Feb 03 22:47:18 preclear_disk_ZFN1PJC3_650: Erasing: progress - 50% erased
         Feb 04 01:36:10 preclear_disk_ZFN1PJC3_650: Erasing: progress - 60% erased
         Feb 04 04:25:08 preclear_disk_ZFN1PJC3_650: Erasing: progress - 70% erased
         Feb 04 07:14:12 preclear_disk_ZFN1PJC3_650: Erasing: progress - 80% erased
         Feb 04 10:03:23 preclear_disk_ZFN1PJC3_650: Erasing: progress - 90% erased
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1906354+0 records out
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 3997914103808 bytes (4.0 TB, 3.6 TiB) copied, 100927 s, 39.6 MB/s
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1906593+0 records in
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1906593+0 records out
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 3998415323136 bytes (4.0 TB, 3.6 TiB) copied, 100939 s, 39.6 MB/s
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1906807+0 records in
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1906807+0 records out
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 3998864113664 bytes (4.0 TB, 3.6 TiB) copied, 100951 s, 39.6 MB/s
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907051+0 records in
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907051+0 records out
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 3999375818752 bytes (4.0 TB, 3.6 TiB) copied, 100963 s, 39.6 MB/s
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907265+0 records in
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907265+0 records out
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 3999824609280 bytes (4.0 TB, 3.6 TiB) copied, 100975 s, 39.6 MB/s
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907508+0 records in
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907508+0 records out
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 4000334217216 bytes (4.0 TB, 3.6 TiB) copied, 100987 s, 39.6 MB/s
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907715+0 records in
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907715+0 records out
         Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 4000768327680 bytes (4.0 TB, 3.6 TiB) copied, 100999 s, 39.6 MB/s
         Feb 04 12:52:44 preclear_disk_ZFN1PJC3_650: dd process hung at 4000770424832, killing....
         Feb 04 12:52:44 preclear_disk_ZFN1PJC3_650: Continuing disk write on byte 4000768327680
         Feb 04 12:52:44 preclear_disk_ZFN1PJC3_650: Erasing: openssl enc -aes-256-ctr -pass pass:'******' -nosalt < /dev/zero > /tmp/.preclear/sdk/fifo
         Feb 04 12:52:44 preclear_disk_ZFN1PJC3_650: Erasing: dd if=/tmp/.preclear/sdk/fifo of=/dev/sdk bs=2097152 seek=4000768327680 count=18702336 conv=notrunc iflag=count_bytes,nocache,fullblock oflag=seek_bytes iflag=fullblock
         Feb 04 12:52:44 preclear_disk_ZFN1PJC3_650: Erasing: dd pid [8748]
         Feb 04 12:53:48 preclear_disk_ZFN1PJC3_650: dd process hung at 0, killing....
         Feb 04 12:53:48 preclear_disk_ZFN1PJC3_650: Erasing: openssl enc -aes-256-ctr -pass pass:'******' -nosalt < /dev/zero > /tmp/.preclear/sdk/fifo
         Feb 04 12:53:48 preclear_disk_ZFN1PJC3_650: Erasing: emptying the MBR.
         Feb 04 12:57:00 preclear_disk_ZFN1PJC3_650: Erasing: dd if=/tmp/.preclear/sdk/fifo of=/dev/sdk bs=2097152 seek=2097152 count=4000784932864 conv=notrunc iflag=count_bytes,nocache,fullblock oflag=seek_bytes iflag=fullblock
         Feb 04 12:57:00 preclear_disk_ZFN1PJC3_650: Erasing: dd pid [31679]
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records in
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records out
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0 bytes copied, 535414 s, 0.0 kB/s
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records in
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records out
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0 bytes copied, 535427 s, 0.0 kB/s
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records in
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records out
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0 bytes copied, 535439 s, 0.0 kB/s
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records in
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records out
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0 bytes copied, 535451 s, 0.0 kB/s
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records in
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records out
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0 bytes copied, 535464 s, 0.0 kB/s
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records in
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records out
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0 bytes copied, 535476 s, 0.0 kB/s
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 48: 31679 Killed $dd_cmd 2> $dd_output
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: dd process hung at 2097152, killing....
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: openssl enc -aes-256-ctr -pass pass:'******' -nosalt < /dev/zero > /tmp/.preclear/sdk/fifo
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: emptying the MBR.
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd if=/tmp/.preclear/sdk/fifo of=/dev/sdk bs=2097152 seek=2097152 count=4000784932864 conv=notrunc iflag=count_bytes,nocache,fullblock oflag=seek_bytes iflag=fullblock
         Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd pid [10243]
         Feb 04 15:45:32 preclear_disk_ZFN1PJC3_650: Erasing: progress - 10% erased
         Feb 04 18:37:31 preclear_disk_ZFN1PJC3_650: Erasing: progress - 20% erased
         Feb 04 21:29:19 preclear_disk_ZFN1PJC3_650: Erasing: progress - 30% erased
         Feb 05 00:21:28 preclear_disk_ZFN1PJC3_650: Erasing: progress - 40% erased
         Feb 05 03:13:48 preclear_disk_ZFN1PJC3_650: Erasing: progress - 50% erased

     Any ideas what is going on?
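     Next time one of these gets "stuck", I'm going to check whether the dd writer is actually still moving before I kill anything. A couple of checks I know of (the pgrep pattern is just my guess at matching the preclear's dd):

         # find the dd that is writing the disk
         pid=$(pgrep -f 'of=/dev/sdk' | head -n 1)
         # GNU dd prints a bytes-copied status line to stderr on SIGUSR1
         kill -USR1 "$pid"
         # raw byte counter; run twice a minute apart to see if it grows
         grep ^write_bytes /proc/$pid/io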