JarDo

Everything posted by JarDo

  1. I am on v6.8.3. How do I spin down unassigned devices? I seem to remember I could click on the green orb and that would do it, but the orbs on my unassigned devices are already grey, yet the drives are still showing a temp. I have to assume that if the drives are showing a temp they are not spun down.
  2. I've precleared several drives with this plugin in the past with no issue. Lately I've been trying to preclear 3 new Hitachi HDS5C4040ALE630 4TB drives. On each attempt they fail on either the pre-read or post-read. All three drives are passing extended SMART tests. I've tried on 2 different SATA ports (with different cables). Did something change in one of the recent updates? I'm having a difficult time believing that I managed to acquire 3 brand new drives that are all bad.
  3. I'm running v6.2.0 and trying to upgrade to 6.2.4. My 'Check for Updates' from the Plugins screen returns no option to upgrade. Does anyone know the URL for v6.2.4 that I can paste into the 'Install Plugin' page?
  4. Yes! That did it. Data is rebuilding right now. Thank you for the quick answer. I was able to get the rebuild started before leaving for work for the day, and that makes me much more comfortable. This procedure makes sense, but until someone recommended it I was a bit hesitant to restart the array with the drive unassigned. But it was no problem at all. I was thinking the same thing. Until I have a chance to tear the hardware apart and check the cabling I've already marked the slot as bad. Once my array is in a 'protected' mode again I'm going to pre-clear and test the old drive.
  5. Do you mean you tried to rebuild it again but it won't let you? When the array is stopped, the server recognizes the drive as assigned to disk10 and next to the Start button is the message "Stopped. Configuration is Valid". There is no option to start the rebuild process. So I press the start button, the array starts, but disk10 is emulated. I don't know what to do next.
  6. I attached current diagnostics. I very much appreciate any input because I'm currently running without fault tolerance and not sure how to repair my situation. unraid-diagnostics-20150821-0007.zip
  7. Ok. Now, I'm a little frustrated. I replaced the drive. The rebuild completed. I started a parity check... And now, the new drive is disabled before parity could finish. Come to think of it, the first drive "failed" during a parity check. It seems the port (cabling?) is the problem, not the drive. UPDATE: I have other slots I could use so I stopped the server, shutdown and moved the drive to another slot. I didn't unassign the drive prior to shutting down, because it was already unassigned. After restarting, the server recognized the drive in the different slot and automatically re-assigned it as disk10 (its prior assignment). The GUI even says that the "configuration is valid". But, when I start the server disk10 has a status of "Device is Disabled, Contents Emulated". I'm not sure what I should do. It's not accepting the drive and it's not rebuilding it either.
  8. Well, I followed your procedure and it is working (rebuilding right now). I was a little confused looking for a check box on the same page where I assigned the new drive to the slot. It took me a minute to realize I had to change to another page to find the checkbox under the button to start the array. But, I still think the instructions are accurate. Maybe I'm just the kind of person who needs more pictures.
  9. Yes. Drives 5-8 are empty slots as a result of my drive consolidation from smaller to larger drives. Drive10 is my failed drive.
  10. I woke up this morning to a failed drive. Right now, I'm in the process of copying all contents to another location on my array and then I'll swap the drive out for a new one. Is the procedure here still accurate for v6.1 rc5: https://lime-technology.com/wiki/index.php/Replacing_a_Data_Drive I was able to catch diagnostics with a syslog that shows the failure event. Can anyone tell me if the nature of the drive failure can be determined? unraid-diagnostics-20150818-0737.zip
  11. I disabled unMenu and restarted. It looks like my shares are back. It's a shame that unMenu isn't compatible with the most recent RC.
  12. After upgrading to RC5, all but one of my shares are gone. Drives show up in Windows Explorer, but shares don't. Also unMenu shows "Array is Stopped" when it isn't. On my flash drive, under /config/shares all .cfg files for my shares appear fine. I don't think I've ever seen this issue before. Does anyone have any advice? My instinct is to restart the server, but I thought I'd better not make any move until after I ask for assistance. unraid-diagnostics-20150817-1312.zip
  13. Thank you Jon. I did as you suggested. But, I'm left with a couple of questions: 1) I have another directory that sits at the same level as my Docker directory on the cache drive and it too has its "use cache" setting set to "No". Why doesn't that directory get deleted but the Docker directory does? 2) What is the benefit of a "use cache" setting of "No" if it will result in directories being unintentionally deleted?
  14. Can you share your diagnostics file (see on the webGui under Tools -> Diagnostics)? And when you say your "docker folder" can you elaborate on that? I've attached the diagnostics file to this post. When I say "docker folder", I'm talking about the folder where my docker.img is saved. Currently it is: /mnt/cache/docker. unraid-diagnostics-20150813-0734.zip
  15. Just upgraded to rc3. Everything is fine except for my usual upgrade issue... Every time I upgrade, my docker folder is deleted. I keep it in /mnt/cache/docker. I also have a /mnt/cache/appdata folder, but that folder isn't ever affected. After upgrading I always have to disable Docker in Settings, recreate the /mnt/cache/docker folder, re-enable Docker in Settings and then reload all my docker containers. Is this normal behavior, or should I expect my Docker config to survive unRAID version upgrades?
  16. I upgraded to RC3. Everything went smoothly except I no longer have a Docker tab. Docker was enabled. I disabled and re-enabled it, but that didn't help. Any suggestions? UPDATE: It seems that my docker.img file disappeared upon reboot after the upgrade. I don't know why. I reset and recreated the image file and all my dockers. Seems to be working now.
  17. Thank you for the answers. I understand very well now. I restarted the zeroing process. I guess it's a good thing that I lost power. Before, dd was running at about 3 MiB/s and the ETA was 300 hours. After the restart dd is now running at about 20 MiB/s and the ETA is under 60 hours.
  18. I have a 4TB drive that I'm trying to remove from my array. The drive was unmounted and I was running dd to zero the drive first so I could remove it without losing parity. Tonight, before the dd operation could finish, I had a power outage. I restarted the server and after starting the array I noticed that the drive I was zeroing now shows as unformatted. A parity check is running right now because the server detected an unclean shutdown. My question... After the parity check is finished, should I be able to remove an unformatted disk from the array without affecting parity? My concern is that I'm certain the drive was not completely zeroed.
  19. You can write to the array during the zeroing process. This is exactly one of the benefits of this procedure. What happened in this case was that while disk1 was unmounted from the array and being zeroed, a cron job that I had forgotten to disable ran and wrote to a share that was excluded on all disks and only included on disk1. In my mind, that would mean that the write would fail because disk1 was unmounted. I use umount /dev/md? instead of umount /mnt/disk? because that is what another more knowledgeable user suggested to me. I never tried /mnt/disk?. Maybe that would work too.
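     A minimal sketch of the clear-before-remove idea discussed above, assuming the target is array slot 1 (the device names here are illustrative):

         umount /dev/md1                     # unmount the md device rather than /mnt/disk1, as suggested above
         dd if=/dev/zero of=/dev/md1 bs=1M   # writing through the md device keeps parity in sync; a 1 MiB block size is much faster than dd's 512-byte default

     Writing zeros through /dev/md1 (rather than the raw /dev/sdX device) is what allows parity to stay valid while the disk is emptied.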
  20. You're out of space on the Docker image file. Okay. I understand now. The Docker image file is a fixed-size virtual disk. Looks like it is currently 10.0 GiB and I filled it up. Sorry, I realize I'm off-topic now, but does anyone know how I can increase the size of the image file?
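     The usual route to grow the image is Settings -> Docker with the Docker service stopped, if the unRAID version in use offers a vdisk size setting there. As a rough manual sketch only, assuming the image is the typical btrfs-formatted loopback file at /mnt/cache/docker/docker.img (paths and size are illustrative):

         truncate -s 20G /mnt/cache/docker/docker.img                # grow the sparse image file to 20 GiB
         LOOP=$(losetup --find --show /mnt/cache/docker/docker.img)  # attach it to a free loop device
         mkdir -p /tmp/dockerimg && mount "$LOOP" /tmp/dockerimg
         btrfs filesystem resize max /tmp/dockerimg                  # expand the filesystem to fill the enlarged file
         umount /tmp/dockerimg && losetup -d "$LOOP"

     Back up the image (or be prepared to recreate it) before resizing; deleting docker.img and re-adding the containers is often the simpler fix.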
  21. Trying to install and I get an error:
      root@localhost:# /usr/bin/docker run -d --name="ownCloud" --net="bridge" -e SUBJECT="/C=US/ST=CA/L=City/O=Organization/OU=Organization Unit/CN=yourhome.com" -e EDGE="1" -e TZ="America/Los_Angeles" -p 8000:8000/tcp -v "/mnt/user":"/mnt/user":rw -v "/mnt/user/appdata/owncloud":"/var/www/owncloud/data":rw gfjardim/owncloud
      Unable to find image 'gfjardim/owncloud' locally
      Pulling repository gfjardim/owncloud
      2015/01/31 10:17:54 Error pulling image (latest) from gfjardim/owncloud, Error storing image size in /var/lib/docker/graph/_tmp/ce0bd171b3fcb2bc2b0f4de73e20f3493d76c2e9bac1a874097a316cc4d2408e/layersize: write /var/lib/docker/graph/_tmp/ce0bd171b3fcb2bc2b0f4de73e20f3493d76c2e9bac1a874097a316cc4d2408e/layersize: no space left on device
      The command failed. Any ideas? I am certain that drive space on my server is not a problem.
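     A quick way to confirm that the "no space left on device" refers to the Docker image rather than the array, assuming the image is loop-mounted at /var/lib/docker as it normally is while the Docker service is running:

         df -h /var/lib/docker   # shows the size and free space of the mounted docker.img, not of the cache drive

     If that filesystem is full, growing docker.img (or removing unused images and containers) is the fix rather than freeing array space.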
  22. Well... maybe... but maybe not. The preclear report said: == Disk /dev/sdi has NOT been successfully precleared == Postread detected un-expected non-zero bytes on disk== Basically, it wrote all zeros to the disk, but when it went to read them back to verify the write was successful, there were some bytes that were not zero. That is a very bad thing, since you cannot rely on the disk to store your data accurately. Nor can you rebuild any other failed disk accurately if one were to fail. I would try a parity check at this point. It might have an error or two as it expected all zeros... Joe L. I completed a parity check. Almost as quickly as I started the check it found and corrected 1 error. It completed in 14 hours without finding any more errors. I'm thinking 1 error is nothing to worry about. I'm not sure how to get any more assurance that my drives are okay. These were the first pre-clears that I performed on v6b12. I find it strange that I got the same result on two different brand new drives.
  23. UPDATE1: As an update to my original question below... I restarted the zeroing process and let it finish. After it was finished, disk1 still contained a share that I could access from client PCs even though the disk was completely overwritten with zeros!! I'm guessing that this share must exist entirely in parity, because there is no way it could exist on the physical disk. Faced with no option other than proceeding with the steps in the procedure, I continued with steps 6 through 9. The only deviation from the procedure as written is that I had to power down the server and restart because it hung when I tried to stop the array. At this moment the drive has been removed as an array device, the array is started, and a parity check is in progress. Currently at 10% with 1 corrected sync error. One thing to note about unRAID v6b12 that I never saw before... Now that disk1 has been removed there is a red Alert box in the top right corner informing me that Disk 1 is in an Error State. I find this very odd since I performed a 'New Config' operation after removing Disk1 from the array. So far, so good. I will let the parity check finish and then I'll move on to step 11 and finish up the steps to physically remove the old drive from the server. ORIGINAL POST:
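     A cautious way to double-check that a zeroed disk really reads back as all zeros before pulling it, assuming /dev/md1 is the cleared slot (this streams the whole device, so it is slow):

         dd if=/dev/md1 bs=1M | tr -d '\0' | wc -c   # strips NUL bytes and counts what remains; a result of 0 means the device is entirely zeros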
  24. That's a good question. I checked and the job writes to a share that is exclusive to disk1. In this case the write is to /mnt/user/backup/, where '/backup/' is the share.
  25. Ok... First off... unless it were some kind of emergency I would never replace 3 drives at the same time. But that's just me. My procedure does assume that you are working on 1 drive at a time. Second... I don't mean to say that you would re-assign disk6 to be disk2. The idea is to move (copy... whatever) disk2 data to disk6 so that disk2 can be zeroed. If you plan to replace (i.e. upgrade) disk2 with new hardware, then you could move the data you (temporarily) moved to disk6 back onto disk2 after the new hardware is installed. If you never plan to replace the old disk2 with new hardware then you are done. No need to reassign any disks. In a nutshell, what you are doing is this:
      1) Moving your data off of one drive to another drive so that the first drive can be decommissioned.
      2) Decommissioning the drive.
      3) Removing the drive from the server.
      4) Starting the server and telling unRAID that the old drive is no longer part of the array.
      You can stop here if you like, but if your purpose is to upgrade a drive then...
      5) Add the new hard drive to the system.
      6) Move your data back onto the new hard drive.
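     A minimal sketch of step 1 above (moving the data off the drive being decommissioned), assuming disk2 is being emptied onto disk6; the paths are illustrative:

         rsync -av /mnt/disk2/ /mnt/disk6/   # copy everything from disk2 to disk6, preserving permissions and timestamps

     After verifying the copy, the contents of disk2 can be deleted and the drive zeroed and removed as described earlier in this thread.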