JarDo


Posts posted by JarDo

  1. I am on v6.8.3.  How do I spin down unassigned devices?  I seem to remember I could click on the green orb and that would do it, but the orbs on my unassigned devices are already grey, yet the drives are still showing a temp.  I have to assume that if the drives are showing a temp, they are not spun down.
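
    (A hedged sketch from the console, in case the orbs won't do it; this assumes the unassigned device is /dev/sdX, so substitute the real device node.)

    hdparm -y /dev/sdX    # issue an immediate standby, i.e. spin the drive down
    hdparm -C /dev/sdX    # report the current power state without waking the drive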

  2. I've precleared several drives with this plugin in the past with no issue.  Lately I've been trying to preclear 3 new Hitachi HDS5C4040ALE630 4TB drives.  On each attempt they fail on either the pre-read or post-read.  All three drives are passing extended SMART tests.  I've tried on 2 different SATA ports (with different cables).  Did something change in one of the recent updates?  I'm having a difficult time believing that I managed to acquire 3 brand new drives that are all bad.
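
    (For reference, a hedged sketch of how the extended tests were run and checked from the console; /dev/sdX stands in for each drive.)

    smartctl -t long /dev/sdX      # start an extended (long) self-test
    smartctl -l selftest /dev/sdX  # later: read back the self-test log
    smartctl -a /dev/sdX           # or the full SMART report, attributes included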

  3. You have to unassign the drive, start the array, stop the array, re-assign the drive, and start the array.

     

    This is needed to get unRAID to "forget" the drive that was in slot10.  Re-assigning it makes unRAID think you've installed a new drive, and it rebuilds the contents onto that drive.

    Yes!  That did it.  Data is rebuilding right now.  Thank you for the quick answer.  I was able to get the rebuild started before leaving for work for the day, and that makes me much more comfortable.

     

    This procedure makes sense, but until someone recommended it I was a bit hesitant to restart the array with the drive unassigned.  But it was no problem at all.

     

    I would run your old drive through a few preclears (once disk10 is rebuilt) to see if it's still good, then re-add it to the array.

     

    If this is true, make sure to mark that original slot as defective so you don't use it in the future.

     

    I was thinking the same thing.  Until I have a chance to tear the hardware apart and check the cabling I've already marked the slot as bad.  Once my array is in a 'protected' mode again I'm going to pre-clear and test the old drive.

  4. Seems odd that it would have said valid and then decided to disable it after starting. When you say

    ...  It's not accepting the drive and it's not rebuilding it either.

    Do you mean you tried to rebuild it again but it won't let you?

    When the array is stopped, the server recognizes the drive as assigned to disk10 and next to the Start button is the message "Stopped. Configuration is Valid".  There is no option to start the rebuild process.  So I press the start button, the array starts, but disk10 is emulated.  I don't know what to do next.

  5. Ok.  Now, I'm a little frustrated.

     

    I replaced the drive.  The rebuild completed.  I started a parity check...  And now, the new drive is disabled before parity could finish.

     

    Come to think of it, the first drive "failed" during a parity check. 

     

    It seems the port (cabling?) is the problem, not the drive.

     

    UPDATE:

    I have other slots I could use, so I stopped the server, shut down, and moved the drive to another slot.  I didn't unassign the drive prior to shutting down, because it was already unassigned.

     

    After restarting, the server recognized the drive in the different slot and automatically re-assigned it as disk10 (its prior assignment).  The GUI even says that the "configuration is valid".

     

    But, when I start the server disk10 has a status of "Device is Disabled, Contents Emulated". 

     

    I'm not sure what I should do.  It's not accepting the drive and it's not rebuilding it either.

  6. I just finished rewriting that page a day or 2 ago (except the last section), to bring it up to date.  Let me know if you have any problems with it, or any suggestions for improvement.

     

    Well, I followed your procedure and it is working (rebuilding right now). 

     

    I was a little confused looking for a check box on the same page where I assigned the new drive to the slot.  It took me a minute to realize I had to change to another page to find the checkbox under the button to start the array.

     

    But, I still think the instructions are accurate.  Maybe I'm just the kind of person who needs more pictures.

  7. According to your diagnostics, you have drives 1-4 OK, then no drives 5-8, then drive 9 OK and drive 10 disabled.

     

    Is it true that you have skipped slots 5-8?

     

    Also, there is no SMART data for the disabled drive10 = WDC_WD30EZRX-00DC0B0_WD-WMC1T1583275

     

    Yes.  Drives 5-8 are empty slots as a result of my drive consolidation from smaller to larger drives.  Drive10 is my failed drive.

  8. I woke up this morning to a failed drive  :-[

     

    Right now, I'm in the process of copying all contents to another location on my array and then I'll swap the drive out for a new one.

     

    Is the procedure here still accurate for v6.1 rc5:

     

    https://lime-technology.com/wiki/index.php/Replacing_a_Data_Drive

     

    I was able to capture diagnostics with a syslog that shows the failure event.  Can anyone tell me if the nature of the drive failure can be determined?  (A grep sketch follows the attachment below.)

     

    unraid-diagnostics-20150818-0737.zip
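
    (For anyone digging into the zip, a hedged sketch of the sort of grep I would run against the syslog; the exact ata error strings vary by controller.)

    grep -iE "i/o error|ata[0-9]+\.[0-9]+: (exception|failed)" /var/log/syslog | tail -n 40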

  9. After upgrading to RC5, all but one of my shares are gone. 

     

    Drives show up in Windows Explorer, but shares don't. 

     

    Also unMenu shows "Array is Stopped" when it isn't.

     

    On my flash drive, under /config/shares, all the .cfg files for my shares appear fine.

     

    I don't think I've ever seen this issue before.  Does anyone have any advice?

     

    My instinct is to restart the server, but I thought I'd better not make any move until after I ask for assistance.

    unraid-diagnostics-20150817-1312.zip
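
    (For the record, a hedged sketch of the two things I checked from the console; the flash drive is mounted at /boot, so /config/shares lives at /boot/config/shares.)

    ls -l /boot/config/shares/    # the .cfg files for each share are still here
    ls /mnt/user/                 # do the share folders themselves still show up?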

  10. Your issue is that the share that you use for docker is NOT set to use the cache.  Login to the unRAID webGui and click on the "Shares" tab at the top.  Then click on docker.  Change the "use cache" option from No to Only.  This should resolve your issue going forward.

     

    This problem stems from creating folders off the root path of a device (e.g. /mnt/cache).  You really shouldn't be doing that unless you don't intend to use shares at all.  Rather, you should be creating shares.  If you want a /mnt/cache/docker and don't want that share to ever contain data on array devices, then you simply create a Cache Only share.

     

    Thank you, Jon.  I did as you suggested.  But I'm left with a couple of questions:

     

    1) I have another directory that sits at the same level as my Docker directory on the cache drive, and it too has its "use cache" setting set to "No".  Why doesn't that directory get deleted when the Docker directory does?

     

    2) What is the benefit of a "use cache" setting of "No" if it will result in directories being unintentionally deleted?
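
    (A hedged way to see Jon's point in practice: list which devices actually hold the folder.  A Cache Only share should turn up only under /mnt/cache, never under any /mnt/disk*.)

    ls -ld /mnt/cache/docker /mnt/disk*/docker 2>/dev/null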

     

  11. Just upgraded to rc3.  Everything is fine except for my usual upgrade issue...

     

    Every time I upgrade my docker folder is deleted.  I keep it in /mnt/cache/docker.  I also have a /mnt/cache/appdata folder, but that folder isn't ever affected.

     

    After upgrading I always have to disable docker in Settings, recreate the /mnt/cache/docker folder, re-enable docker in Settings, and then reload all my docker containers.

     

    Is this normal behavior, or should I expect my docker config to survive unRAID version upgrades?

     

    Can you share your diagnostics file (found on the webGui under Tools -> Diagnostics)?  And when you say your "docker folder", can you elaborate on that?

     

    I've attached the diagnostics file to this post.  When I say "docker folder", I'm talking about the folder where my docker.img is saved.  Currently it is: /mnt/cache/docker.

    unraid-diagnostics-20150813-0734.zip

  12. Just upgraded to rc3.  Everything is fine except for my usual upgrade issue...

     

    Every time I upgrade my docker folder is deleted.  I keep it in /mnt/cache/docker.  I also have a /mnt/cache/appdata folder, but that folder isn't ever affected.

     

    After upgrading I always have to disable docker in Settings, recreate the /mnt/cache/docker folder, re-enable docker in Settings, and then reload all my docker containers.

     

    Is this normal behavior, or should I expect my docker config to survive unRAID version upgrades?

  13. I upgraded to RC3.  Everything went smoothly except that I no longer have a Docker tab.  Docker was enabled.  I disabled and re-enabled it, but that didn't help.  Any suggestions?

     

    UPDATE:

    It seems that my docker.img file disappeared upon reboot after upgrade.  I don't know why.  I reset and recreated the image file and all my dockers.  Seems to be working now.

  14. Thank you for the answers.  I understand very well now.  I restarted the zeroing process.  I guess it's a good thing that I lost power.  Before, dd was running at about 3 MiB/s and the ETA was 300 hours.  After the restart, dd is running at about 20 MiB/s and the ETA is under 60 hours.
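
    (For anyone following along, a hedged sketch of the zeroing step as I understand it, assuming the disk being cleared is disk1, i.e. /dev/md1; writing through the md device is what keeps parity in sync, and dd's block size is worth setting explicitly since the default is painfully slow.)

    umount /dev/md1                       # take disk1 out of use first
    dd if=/dev/zero of=/dev/md1 bs=2048k  # zero it through the md device so parity stays valid
    kill -USR1 $(pidof dd)                # from another shell: ask dd to print its current rate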

     

  15. I have a 4TB drive that I'm trying to remove from my array.

     

    The drive was unmounted and I was running dd to zero it first so I could remove it without losing parity.

     

    Tonight, before the dd operation could finish, I had a power outage.

     

    I restarted the server and after starting the array I noticed that the drive I was zeroing now shows as unformatted.

     

    A parity check is running right now because the server detected an unclean shutdown.

     

    My question...  After the parity check is finished should I be able to remove an unformatted disk from the array without affecting parity?  My concern is that I'm certain the drive was not completely zeroed.

  16. Hmm.  So that would mean that you cannot write to the array during the zeroing process, which is what I thought the advantage of your method was in the first place.  Did you get the same behavior on version 5?

    One more question: why did you use umount /dev/md? instead of umount /mnt/disk? to unmount the disk to be removed?

     

    You can write to the array during the zeroing process.  This is exactly one of the benefits of this procedure.  What happened in this case was that while disk1 was unmounted from the array and being zeroed, a cron job that I had forgotten to disable ran and wrote to a share that was excluded from all other disks and included only on disk1.  In my mind, that would mean the write should fail because disk1 was unmounted.

     

    I use umount /dev/md? instead of umount /mnt/disk? because that is what another more knowledgeable user suggested to me.  I never tried /mnt/disk?.  Maybe that would work too.
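
    (A hedged sanity check before zeroing, again assuming disk1: if the grep prints nothing, the disk is not mounted anywhere, although as the cron job showed, /mnt/user shares that map to that disk can apparently still accept writes.)

    mount | grep -E "/dev/md1|/mnt/disk1"   # no output means disk1 is unmounted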

  17. Trying to install and I get an error:

     

    root@localhost:# /usr/bin/docker run -d --name="ownCloud" --net="bridge" -e SUBJECT="/C=US/ST=CA/L=City/O=Organization/OU=Organization Unit/CN=yourhome.com" -e EDGE="1" -e TZ="America/Los_Angeles" -p 8000:8000/tcp -v "/mnt/user":"/mnt/user":rw -v "/mnt/user/appdata/owncloud":"/var/www/owncloud/data":rw gfjardim/owncloud
    Unable to find image 'gfjardim/owncloud' locally
    Pulling repository gfjardim/owncloud
    2015/01/31 10:17:54 Error pulling image (latest) from gfjardim/owncloud, Error storing image size in /var/lib/docker/graph/_tmp/ce0bd171b3fcb2bc2b0f4de73e20f3493d76c2e9bac1a874097a316cc4d2408e/layersize: write /var/lib/docker/graph/_tmp/ce0bd171b3fcb2bc2b0f4de73e20f3493d76c2e9bac1a874097a316cc4d2408e/layersize: no space left on device
    
    The command failed.

     

    Any ideas?  I am certain that drive space on my server is not a problem.

     

    You're out of space on the Docker image file.

     

    Okay.  I understand now.  The Docker image file is a fixed-size virtual disk.  Looks like it is currently 10.0 GiB and I filled it up.  Sorry, I realize I'm off-topic now, but does anyone know how I can increase the size of the image file?
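
    (In case it helps the next person, a hedged way to confirm it is the image that is full rather than the cache drive; the image is loop-mounted at /var/lib/docker, and /mnt/cache/docker/docker.img is just where mine happens to live.)

    df -h /var/lib/docker /mnt/cache      # /var/lib/docker should show close to 100% used
    ls -lh /mnt/cache/docker/docker.img   # the fixed-size image file itself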

  18. Trying to install and I get an error:

     

    root@localhost:# /usr/bin/docker run -d --name="ownCloud" --net="bridge" -e SUBJECT="/C=US/ST=CA/L=City/O=Organization/OU=Organization Unit/CN=yourhome.com" -e EDGE="1" -e TZ="America/Los_Angeles" -p 8000:8000/tcp -v "/mnt/user":"/mnt/user":rw -v "/mnt/user/appdata/owncloud":"/var/www/owncloud/data":rw gfjardim/owncloud
    Unable to find image 'gfjardim/owncloud' locally
    Pulling repository gfjardim/owncloud
    2015/01/31 10:17:54 Error pulling image (latest) from gfjardim/owncloud, Error storing image size in /var/lib/docker/graph/_tmp/ce0bd171b3fcb2bc2b0f4de73e20f3493d76c2e9bac1a874097a316cc4d2408e/layersize: write /var/lib/docker/graph/_tmp/ce0bd171b3fcb2bc2b0f4de73e20f3493d76c2e9bac1a874097a316cc4d2408e/layersize: no space left on device
    
    The command failed.

     

    Any ideas?  I am certain that drive space on my server is not a problem.

  19. So, I just finished preclearing 2 drives.  In both cases, the result was something like "Disk /dev/sdk has NOT been successfully precleared".  I don't fully understand why.  I tried adding one of the precleared drives to my array and there was no issue.  I've attached the preclear logs.

     

    I'm using preclear_disk.sh v1.15 on unRAID v6b12.

     

    If I'm able to add the drives to my array with no problem, should I be okay?

    Well... maybe... but maybe not.

     

     

    The preclear report said:

    == Disk /dev/sdi has NOT been successfully precleared

    == Postread detected un-expected non-zero bytes on disk==

     

    Basically, it wrote all zeros to the disk, but when it went to read them back to verify the write was successful, there were some bytes that were not zero.

     

    That is a very bad thing, since you cannot rely on the disk to store your data accurately.  Nor can you rebuild any other failed disk accurately if one were to fail.

     

    I would try a parity check at this point.  It might have an error or two as it expected all zeros...

     

    Joe L.

     

    I completed a parity check.  Almost as soon as I started the check it found and corrected 1 error.  It completed in 14 hours without finding any more errors.  I'm thinking 1 error is nothing to worry about.  I'm not sure how to get any more assurance that my drives are okay.

     

    These were the first pre-clears that I performed on v6b12.  I find it strange that I got the same result on two different brand new drives.
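
    (For what it's worth, had the drives still been outside the array, a hedged way to re-check the zeroing independently of the preclear script would be a straight compare against /dev/zero; slow on a big drive, but unambiguous.)

    cmp /dev/sdi /dev/zero   # "EOF on /dev/sdi" means every byte read back as zero;
                             # "differ: byte N" means a non-zero byte at that offset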

  20. UPDATE1:

     

    As an update to my original question below...

     

    I restarted the zeroing process and let it finish.  After it was finished, disk1 still contained a share that I could access from client PCs, even though the disk was completely overwritten with zeros!!

     

    I'm guessing that this share must exist entirely in parity because there is no way it could exist on the physical disk. 

     

    Faced with no option other than proceeding with the steps in the procedure, I continued with steps 6 through 9.  The only deviation from the procedure as written is that I had to power down the server and restart because it hung when I tried to stop the array.  At this moment the drive has been removed as an array device, the array is started, and a parity check is in progress.  Currently at 10% with 1 corrected sync error.

     

    One thing to note about unRAID v6b12 that I never saw before... Now that disk1 has been removed there is a red Alert box in the top right corner informing me that Disk 1 is in an Error State.  I find this very odd since I performed a 'New Config' operation after removing Disk1 from the array.

     

    So far, so good.  I will let the parity check finish and then I'll move on to step 11 and finish up the steps to physically remove the old drive from the server.

     

    ORIGINAL POST:

    Ok.  Now I've got a question for you all.

     

    I am currently following the procedure I posted above and something occurred that has me puzzled.

     

    I have a drive umount'd and currently being zeroed.  We'll call this drive disk1. 

     

    Overnight, a cron job I forgot to disable performed a backup of my flash drive to disk1.  Now disk1 is visible as a mapped drive with the contents of the backup job.

     

    When I discovered this I stopped and restarted the zeroing process.  I'm not sure what else I can do.  I'm assuming these files don't actually exist on disk1 but do exist in parity.

     

    Does this behavior make sense?  I had assumed that any attempt to write to an unmounted drive would fail.  Is this behavior normal or is this an issue with unRAID v6?  This is the first time I've tried to zero a drive with version 6.

     

    Another thing I noticed with v6 is that after unmounting the drive, the webGui did not show the drive with zero free space.  unMenu does show zero free space, but not the stock webGui.

     

    Any thoughts?

  21. Did the cron job target disk1 explicitly or was it a write to a share that included disk1?  If it is the latter case, then I don't see how your method would allow use of the array while you are "zeroing" the to-be-removed disk.

     

    That's a good question.  I checked and the job writes to a share that is exclusive to disk1.  In this case the write is to /mnt/user/backup/, where '/backup/' is the share.
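
    (A hedged check I could have run at the time: list the backup share through the user mount and through each disk mount to see where those files physically landed, if anywhere.)

    ls -l /mnt/user/backup/
    ls -l /mnt/disk*/backup/ /mnt/cache/backup/ 2>/dev/null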

     

  22. JarDo,

     

    Thanks again.  I think I now understand, but I don't see how I can use your method.

    Correct me if I am wrong, but it seems as if you use your method to *replace* a single drive with a larger one?

    This means that if, for instance, disk2 is a 1 TB drive, you copy its data to a new drive, say disk6, then zero disk2 to preserve parity, and then re-assign disk6 to be disk2.  Is this right?  If so, then I can't use it because I intend to replace 3 drives at the same time, which means I *must* re-assign all the drives.  If I am wrong, then I still don't understand exactly what you are doing.

     

    Thanks again.

     

    Ok...  First off... unless it were some kind of emergency I would never replace 3 drives at the same time.  But that's just me.  My procedure does assume that you are working on 1 drive at a time.

     

    Second... I don't mean to say that you would re-assign disk6 to be disk2.  The idea is to move (copy... whatever) disk2's data to disk6 so that disk2 can be zeroed.  If you plan to replace (i.e. upgrade) disk2 with new hardware, you can then move the data you (temporarily) parked on disk6 back onto disk2 after the new hardware is installed.  If you never plan to replace the old disk2 with new hardware then you are done.  No need to reassign any disks.

     

    In a nutshell what you are doing is this:

     

    1) Moving your data from one drive to another so that the first drive can be decommissioned (see the sketch after this list).

    2) Decommissioning the drive

    3) Removing the drive from the server

    4) Starting the server and telling unRAID that the old drive is no longer part of the array.

     

    You can stop here if you like, but if your purpose is to upgrade a drive then...

     

    5) Add new hard drive to the system

    6) Move your data back onto the new hard drive.
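
    A hedged sketch of step 1 (and of step 6, for an upgrade), using the disk2/disk6 example from above: copy disk-to-disk via the disk shares and verify before anything gets zeroed.

    rsync -av /mnt/disk2/ /mnt/disk6/     # copy disk2's contents onto disk6
    diff -rq /mnt/disk2/ /mnt/disk6/      # optional: confirm the copy before zeroing disk2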