About captain_video

  1. Once the data rebuild was complete, I shut down the server and then restarted it. When it came back up, everything was back to the way it should be, so it's all good now. I think maybe I just switched screens too fast in the GUI and it got stuck somehow. I've been using UNRAID for about 12 years and this was the first time I had seen anything like this occur, so I thought I'd share it with the rest of the class.
  2. I ran CCleaner the other day, so there shouldn't be much in the cache to cause this issue. I think it just got hung up when I tried to switch screens too quickly. It says the array is stopped, but it is clearly up and running even though it thinks it isn't. I just cleared the cache and closed the web browser. I reopened Chrome and connected to the server, and I'm still getting the same screen. I think it's just a Gremlin infestation.
  3. I have tried that and every other trick I can think of. I've closed my web browser and reconnected, and I still get the same screen. The weird part is that even though it says the array is stopped, it is still allowing me access to my files. I can play videos and movies streamed from the server, so it appears to be a glitch in UNRAID. I'm just going to let it finish the data rebuild and then reboot the array. Hopefully it will correct itself when it resets.
  4. UNRAID PRO version 6.9.2. I just installed a replacement drive to upgrade to a larger one. I assigned the new drive to the old slot as always, but when I started the array to begin the data rebuild, the GUI showed the rebuild as in progress while the screen still showed the array as stopped, and the only Array Operation options were REBOOT or SHUTDOWN. What's weird is that I'm still able to access the files from my computer even though it says the array is stopped. I've attached the system log. It indicates a Stale Configuration in the bottom left of the screen along
  5. Just ran the latest version of pre-clear on a new drive and it appears to have fixed the problems I was having earlier. The problem with it failing was fixed in a previous version, but this latest version is configured to only run the zeroing process once and not twice like it had been doing. I generally buy external drives and shuck them before running pre-clear, but someone had mentioned running pre-clear with the enclosure hooked up to a USB port before taking the drive out of the enclosure. I tried it and so far it's working great. It seems to be running as fast as when perf
  6. Any update on the preclear issue I was having? I've run it on two 8TB drives and it ran the zeroing function twice on both. Total time to run preclear on a single drive was close to 80 hours, including the pre-read, zeroing x 2, and post-read sequences. Reducing the zeroing sequence to a single pass would cut that time by about 24-25 hours. I just use these drives to replace failed drives, ones that are nearing end of life, or smaller drives I'm upgrading to increase storage. I don't really even need to run preclear on these drives, but I do it to en
  7. PM sent with diagnostics file and preclear log
  8. I'm running pre-clear on a 2nd 8TB drive and it is also performing the zeroing process twice. Is this really necessary? Can we get the option to just do this once?
  9. The preclear completed with no issues so it would appear that whatever bug was causing my problem has been corrected. The pre- and post-read functions took about 15 hours each, but the zeroing phase took about 49 hours for an 8 TB drive. This phase was performed twice for whatever reason even though I had selected only one cycle for the pre-clear process.
  10. Been running preclear on one of my 8TB disks and it appears that it has run the zeroing function twice along with the pre-read function. It is currently performing the post-read segment, so I assume it should be completed by tomorrow morning. Does the latest preclear plugin automatically zero the disk twice? I don't recall changing that option when I initiated preclear. I generally only run preclear once on any new disk since most of them are used to rebuild data on either a failed disk or one that is being replaced due to its age.
  11. Great! I have a couple of disks that I'm running the long generic tests on before attempting a pre-clear. They're 8TB drives, so it could take 45 hours or so to run pre-clear on one of them. I'll post my results when the first one completes.
  12. I got this exact same sequence of error messages when attempting to preclear a drive. I have a slot in my server rack that I use specifically for pre-clearing drives. I replaced the SATA cable and put the drive into a new slot in the rack, and the messages disappeared. The drive did finally fail during the zeroing process at around the 75% mark. I pulled the drive and ran SeaTools on it, and it failed the short drive self-test, so it turned out the drive was bad. I think this is actually unrelated to the error log you posted and feel it was more of a cable or backplane issue.
  13. UnRAID version 6.9.1 and 2021.01.03 version of preclear. I tried running preclear on a new disk and it craps out with the following errors in the log:
      Mar 15 15:28:51 preclear_disk_Z84013MY_11172: /usr/local/emhttp/plugins/preclear.disk/script/ line 475: /tmp/.preclear/sdaf/dd_output_complete: No such file or directory
      Mar 15 15:28:51 preclear_disk_Z84013MY_11172: /usr/local/emhttp/plugins/preclear.disk/script/ line 475: [: -gt: unary operator expected
      Mar 15 15:28:51 preclear_disk_Z84013MY_11172: /usr/local/emhttp/plugins/preclea
  14. I'm sure the drive I had was an exception to the rule. I would like to think that these drives are decent enough to use in a PC or a server; mine just seems to be an aberration. The weird thing is that it seemed to have no problems working on a Windows PC. My UNRAID server just seemed to take great exception to it for some reason. I've probably used about five dozen different drives in my unRAID server over the years and this is the first time I have ever experienced anything like this (I've got over 40 hard drives sitting on a shelf that were pulled from the server over the years due to
  15. I've been gradually upgrading all of the 3TB and 4TB drives in my server with 8TB drives as they get a lot of age on them. I have about a 60-40 mix of Seagate to WD drives with a few HGST drives mixed in. I've never had this issue with any drive before, regardless of manufacturer or capacity. I used to have a lot of 1.5TB Samsung drives and never had a single failure with them. I believe Seagate bought them out years ago, so they're pretty much defunct now. I also had a handful of Toshiba 3TB drives, but they have since been upgraded to 8TB drives. The server is currently at 176 TB capa
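For anyone wondering what the "zeroing" phase in posts 9 and 10 actually does: it is essentially a stream of zeros written across the whole device, followed by a read-back check. A minimal sketch of one pass, run against a throwaway scratch file instead of a real /dev/sdX (the 4 MiB size is purely illustrative; this is not the plugin's actual code):

```shell
#!/bin/bash
# Sketch of one preclear-style "zeroing" pass against a temp file.
# Never point a command like this at a disk device you care about.
target=$(mktemp)

# The zeroing pass itself: stream zeros over the target.
dd if=/dev/zero of="$target" bs=1M count=4 status=none

# Read-back check: the first 4 MiB should compare equal to /dev/zero.
if cmp -s -n $((4 * 1024 * 1024)) "$target" /dev/zero; then
    echo "zero pass verified"
else
    echo "zero pass FAILED"
fi

rm -f "$target"
```

On a real multi-terabyte drive the same write-then-verify sequence runs at the disk's sequential speed, which is why each pass takes the better part of a day.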
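The run times reported in posts 6 and 9 are internally consistent: roughly 15 hours each for the pre-read and post-read, and about 49 hours for the doubled zeroing phase, i.e. around 24.5 hours per pass. A quick back-of-the-envelope check (the figures are the rough numbers from the posts, not measurements):

```shell
#!/bin/bash
# Sanity-check the reported preclear pass times for an 8TB drive.
pre_read=15        # hours, sequential read of the whole disk
zero_pass=24.5     # hours, one zeroing pass (half of the reported ~49 h)
post_read=15       # hours, verification read

# Total with the zeroing pass run twice, as the plugin was doing:
total_double=$(awk -v p=$pre_read -v z=$zero_pass -v q=$post_read \
    'BEGIN{printf "%.1f", p + 2*z + q}')
# Total with a single zeroing pass:
total_single=$(awk -v p=$pre_read -v z=$zero_pass -v q=$post_read \
    'BEGIN{printf "%.1f", p + z + q}')

echo "double zero: ${total_double} h"   # ~79 h, close to the ~80 h reported
echo "single zero: ${total_single} h"   # ~54.5 h, saving ~24-25 h
```

So dropping the second zeroing pass saves almost exactly the 24-25 hours estimated in post 6.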
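A guess at what is behind the "[: -gt: unary operator expected" line in the post-13 log (this is a generic bash failure mode, not the plugin's actual code): when the dd_output file is missing, whatever variable the script reads from it ends up empty, and an unquoted old-style test like `[ $var -gt 0 ]` collapses to `[ -gt 0 ]`, which is exactly that error. A minimal reproduction with a hypothetical variable name:

```shell
#!/bin/bash
# bytes_written stands in for a value the script would read from the
# missing /tmp/.preclear/.../dd_output_complete file.
bytes_written=""

# Unquoted test: expands to `[ -gt 0 ]` when the variable is empty,
# producing "[: -gt: unary operator expected" (suppressed here).
if [ $bytes_written -gt 0 ] 2>/dev/null; then
    echo "progress recorded"
fi

# Guarded version: quote the expansion and default empty to 0.
if [ "${bytes_written:-0}" -gt 0 ]; then
    echo "progress recorded"
else
    echo "no progress yet"
fi
```

The guarded form never breaks on an empty or unset variable, which is the usual defensive fix for this class of bash error.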