JarDo

Members
  • Posts: 377
  • Joined
  • Last visited
  • Gender: Undisclosed

JarDo's Achievements

  • Contributor (5/14)
  • Reputation: 1

  1. I am on v6.8.3. How do I spin down unassigned devices? I seem to remember I could click on the green orb and that would do it, but the orbs on my unassigned devices are already grey, yet the drives are still showing a temp. I have to assume that if the drives are showing a temp, they are not spun down. (A command-line sketch for checking this follows after this list.)
  2. I've precleared several drives with this plugin in the past with no issues. Lately I've been trying to preclear 3 new Hitachi HDS5C4040ALE630 4TB drives. On each attempt they fail on either the pre-read or the post-read. All three drives are passing extended SMART tests (see the smartctl sketch after this list). I've tried 2 different SATA ports (with different cables). Did something change in one of the recent updates? I'm having a difficult time believing that I managed to acquire 3 brand new drives that are all bad.
  3. I'm running v6.2.0 and trying to upgrade to 6.2.4. My 'Check for Updates' from the Plugins screen returns no option to upgrade. Does anyone know the URL for v6.2.4 that I can paste into the 'Install Plugin' page?
  4. Yes! That did it. Data is rebuilding right now. Thank you for the quick answer. I was able to get the rebuild started before leaving for work for the day, and that makes me much more comfortable. This procedure makes sense, but until someone recommended it I was a bit hesitant to restart the array with the drive unassigned. But it was no problem at all. I was thinking the same thing: until I have a chance to tear the hardware apart and check the cabling, I've already marked the slot as bad. Once my array is in a 'protected' mode again, I'm going to pre-clear and test the old drive.
  5. "Do you mean you tried to rebuild it again but it won't let you?" When the array is stopped, the server recognizes the drive as assigned to disk10, and next to the Start button is the message "Stopped. Configuration is Valid". There is no option to start the rebuild process. So I press the Start button, the array starts, but disk10 is emulated. I don't know what to do next.
  6. I've attached current diagnostics. I very much appreciate any input, because I'm currently running without fault tolerance and I'm not sure how to repair my situation. unraid-diagnostics-20150821-0007.zip
  7. Ok. Now I'm a little frustrated. I replaced the drive. The rebuild completed. I started a parity check... and now the new drive is disabled before parity could finish. Come to think of it, the first drive "failed" during a parity check. It seems the port (cabling?) is the problem, not the drive. UPDATE: I have other slots I could use, so I stopped the server, shut down, and moved the drive to another slot. I didn't unassign the drive prior to shutting down, because it was already unassigned. After restarting, the server recognized the drive in the different slot and automatically re-assigned it as disk10 (its prior assignment). The GUI even says that the "configuration is valid". But when I start the server, disk10 has a status of "Device is Disabled, Contents Emulated". I'm not sure what I should do. It's not accepting the drive and it's not rebuilding it either.
  8. Well, I followed your procedure and it is working (rebuilding right now). I was a little confused looking for a checkbox on the same page where I assigned the new drive to the slot. It took me a minute to realize I had to switch to another page to find the checkbox under the button that starts the array. But I still think the instructions are accurate. Maybe I'm just the kind of person who needs more pictures.
  9. Yes. Drives 5-8 are empty slots as a result of my drive consolidation from smaller to larger drives. Disk10 is my failed drive.
  10. I woke up this morning to a failed drive. Right now I'm in the process of copying all its contents to another location on my array, and then I'll swap the drive out for a new one. Is the procedure here still accurate for v6.1 rc5? https://lime-technology.com/wiki/index.php/Replacing_a_Data_Drive I was able to catch diagnostics with a syslog that shows the failure event. Can anyone tell me if the nature of the drive failure can be determined (see the syslog sketch after this list)? unraid-diagnostics-20150818-0737.zip
  11. I disabled unMenu and restarted. It looks like my shares are back. It's a shame that unMenu isn't compatible with the most recent RC.
  12. After upgrading to RC5, all but one of my shares are gone. Drives show up in Windows Explorer, but shares don't. Also, unMenu shows "Array is Stopped" when it isn't. On my flash drive, under /config/shares, all the .cfg files for my shares appear fine. I don't think I've ever seen this issue before. Does anyone have any advice? My instinct is to restart the server, but I thought I'd better not make any move until I ask for assistance. unraid-diagnostics-20150817-1312.zip
  13. Thank you, Jon. I did as you suggested. But I'm left with a couple of questions: 1) I have another directory that sits at the same level as my Docker directory on the cache drive, and it too has its "use cache" setting set to "No". Why doesn't that directory get deleted when the Docker directory does? 2) What is the benefit of a "use cache" setting of "No" if it results in directories being unintentionally deleted?
  14. "Can you share your diagnostics file (see the webGui under Tools -> Diagnostics)? And when you say your 'docker folder', can you elaborate on that?" I've attached the diagnostics file to this post. When I say "docker folder", I'm talking about the folder where my docker.img is saved. Currently it is /mnt/cache/docker. unraid-diagnostics-20150813-0734.zip
  15. Just upgraded to rc3. Everything is fine except for my usual upgrade issue... Every time I upgrade, my docker folder is deleted. I keep it in /mnt/cache/docker. I also have a /mnt/cache/appdata folder, but that folder is never affected. After upgrading, I always have to disable docker in settings, recreate the /mnt/cache/docker folder, re-enable docker in settings, and then reload all my docker containers (a console version of these steps is sketched after this list). Is this normal behavior, or should I expect my docker config to survive unRAID version upgrades?
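
A minimal console sketch for the spin-down question above, assuming the disk appears as a hypothetical /dev/sdX and that hdparm is available on the unRAID console; the Unassigned Devices plugin may offer its own spin-down control, so this is only a manual fallback:

    # Report the current power state of a hypothetical unassigned disk /dev/sdX:
    # "standby" means spun down, "active/idle" means still spinning.
    hdparm -C /dev/sdX

    # Put the drive into standby (spin it down) immediately.
    hdparm -y /dev/sdX

    # Note: polling SMART data for a temperature can wake some drives, which may
    # explain why the GUI still shows a temp for a drive that looks spun down.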
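
For the preclear failures above, a hedged sketch of the smartctl commands commonly used to run and review an extended self-test, with /dev/sdX again standing in for one of the Hitachi drives:

    # Start the long (extended) SMART self-test on a hypothetical /dev/sdX.
    smartctl -t long /dev/sdX

    # Afterwards, review the self-test log and the attributes most relevant to
    # read failures: reallocated/pending sectors and UDMA CRC errors (the latter
    # usually point at cabling or the port rather than the drive itself).
    smartctl -a /dev/sdX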
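
For the failed-drive diagnostics above, a small sketch of how the failure event might be located in the syslog bundled with the diagnostics zip, assuming the archive has been extracted and the log file is named syslog.txt (the actual name inside the zip may differ):

    # Scan the extracted syslog for common drive-failure markers: ATA resets,
    # I/O errors, and CRC errors (CRC errors usually implicate the cable or port).
    grep -iE 'ata[0-9]+|i/o error|udma crc|read error|write error' syslog.txt | less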
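
For the docker-folder question above, a hedged console version of the manual recovery steps described in that post; it assumes Docker has already been disabled from Settings in the webGui and that the image lives at /mnt/cache/docker/docker.img as stated there:

    # Recreate the folder on the cache drive that holds the Docker image.
    mkdir -p /mnt/cache/docker

    # After re-enabling Docker in Settings (which recreates docker.img),
    # confirm the image exists and that the containers are back up.
    ls -lh /mnt/cache/docker/docker.img
    docker ps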